PR yue9944882: Prune internal clients from CRD apiserver
Result FAILURE
Tests 1 failed / 2899 succeeded
Started 2019-12-03 12:58
Elapsed 24m36s
Revision 800fa605a3db1d30328b7b456fc0ee97dce2a499
Refs 84005

Test Failures


k8s.io/kubernetes/test/integration/deployment TestDeploymentAvailableCondition 6.27s

go test -v k8s.io/kubernetes/test/integration/deployment -run TestDeploymentAvailableCondition$
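The failing test asserts on the Deployment's Available status condition. For context, here is a minimal client-go sketch of polling for that condition; it is not the actual test body from test/integration/deployment, it assumes a recent client-go (context-aware Get), and the function name and arguments are illustrative:

```go
// Hedged sketch: wait until a Deployment reports Available=True.
// Not the actual TestDeploymentAvailableCondition code; clientset wiring,
// ns, name, and the helper name are illustrative assumptions.
package main

import (
	"context"
	"time"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDeploymentAvailable polls the Deployment until its Available
// condition is True, or returns an error on timeout.
func waitForDeploymentAvailable(c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
		d, err := c.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range d.Status.Conditions {
			if cond.Type == appsv1.DeploymentAvailable {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		// Condition not yet reported by the deployment controller.
		return false, nil
	})
}
```

Note that the integration test brings up a real apiserver backed by etcd; the storagebackend.Config lines in the log below show it expects etcd at http://127.0.0.1:2379. The raw test log follows.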
=== RUN   TestDeploymentAvailableCondition
W1203 13:17:18.305517  106607 services.go:37] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I1203 13:17:18.305541  106607 services.go:51] Setting service IP to "10.0.0.1" (read-write).
I1203 13:17:18.305554  106607 master.go:311] Node port range unspecified. Defaulting to 30000-32767.
I1203 13:17:18.305564  106607 master.go:267] Using reconciler: 
I1203 13:17:18.307572  106607 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.307828  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.307930  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.308810  106607 store.go:1350] Monitoring podtemplates count at <storage-prefix>//podtemplates
I1203 13:17:18.308865  106607 reflector.go:188] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I1203 13:17:18.308866  106607 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.309238  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.309259  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.310051  106607 store.go:1350] Monitoring events count at <storage-prefix>//events
I1203 13:17:18.310112  106607 reflector.go:188] Listing and watching *core.Event from storage/cacher.go:/events
I1203 13:17:18.310106  106607 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.310359  106607 watch_cache.go:409] Replace watchCache (rev: 21787) 
I1203 13:17:18.310444  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.310462  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.310977  106607 watch_cache.go:409] Replace watchCache (rev: 21787) 
I1203 13:17:18.311391  106607 store.go:1350] Monitoring limitranges count at <storage-prefix>//limitranges
I1203 13:17:18.311489  106607 reflector.go:188] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I1203 13:17:18.311583  106607 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.311798  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.311820  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.312335  106607 watch_cache.go:409] Replace watchCache (rev: 21787) 
I1203 13:17:18.312698  106607 store.go:1350] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I1203 13:17:18.312888  106607 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.313030  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.313053  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.313124  106607 reflector.go:188] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I1203 13:17:18.313991  106607 store.go:1350] Monitoring secrets count at <storage-prefix>//secrets
I1203 13:17:18.314141  106607 watch_cache.go:409] Replace watchCache (rev: 21787) 
I1203 13:17:18.314161  106607 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.314297  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.314314  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.314393  106607 reflector.go:188] Listing and watching *core.Secret from storage/cacher.go:/secrets
I1203 13:17:18.315376  106607 watch_cache.go:409] Replace watchCache (rev: 21787) 
I1203 13:17:18.316824  106607 store.go:1350] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I1203 13:17:18.316884  106607 reflector.go:188] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I1203 13:17:18.317002  106607 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.317145  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.317163  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.317838  106607 store.go:1350] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I1203 13:17:18.317914  106607 reflector.go:188] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I1203 13:17:18.318018  106607 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.318133  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.318152  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.318453  106607 watch_cache.go:409] Replace watchCache (rev: 21788) 
I1203 13:17:18.318768  106607 store.go:1350] Monitoring configmaps count at <storage-prefix>//configmaps
I1203 13:17:18.318872  106607 reflector.go:188] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I1203 13:17:18.318929  106607 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.319095  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.319115  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.319771  106607 store.go:1350] Monitoring namespaces count at <storage-prefix>//namespaces
I1203 13:17:18.319789  106607 watch_cache.go:409] Replace watchCache (rev: 21788) 
I1203 13:17:18.319930  106607 watch_cache.go:409] Replace watchCache (rev: 21788) 
I1203 13:17:18.319927  106607 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.319953  106607 reflector.go:188] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I1203 13:17:18.320054  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.320070  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.321034  106607 store.go:1350] Monitoring endpoints count at <storage-prefix>//services/endpoints
I1203 13:17:18.321052  106607 watch_cache.go:409] Replace watchCache (rev: 21788) 
I1203 13:17:18.321120  106607 reflector.go:188] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I1203 13:17:18.321222  106607 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.321545  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.321581  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.322184  106607 watch_cache.go:409] Replace watchCache (rev: 21788) 
I1203 13:17:18.322293  106607 store.go:1350] Monitoring nodes count at <storage-prefix>//minions
I1203 13:17:18.322333  106607 reflector.go:188] Listing and watching *core.Node from storage/cacher.go:/minions
I1203 13:17:18.322514  106607 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.322621  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.322660  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.323408  106607 watch_cache.go:409] Replace watchCache (rev: 21788) 
I1203 13:17:18.323595  106607 store.go:1350] Monitoring pods count at <storage-prefix>//pods
I1203 13:17:18.323627  106607 reflector.go:188] Listing and watching *core.Pod from storage/cacher.go:/pods
I1203 13:17:18.323838  106607 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.323984  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.324004  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.324713  106607 store.go:1350] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I1203 13:17:18.324782  106607 reflector.go:188] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I1203 13:17:18.324880  106607 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.324985  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.325002  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.325675  106607 store.go:1350] Monitoring services count at <storage-prefix>//services/specs
I1203 13:17:18.325722  106607 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.325842  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.325859  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.325939  106607 reflector.go:188] Listing and watching *core.Service from storage/cacher.go:/services/specs
I1203 13:17:18.326302  106607 watch_cache.go:409] Replace watchCache (rev: 21788) 
I1203 13:17:18.326382  106607 watch_cache.go:409] Replace watchCache (rev: 21788) 
I1203 13:17:18.327462  106607 watch_cache.go:409] Replace watchCache (rev: 21788) 
I1203 13:17:18.327559  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.327577  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.328514  106607 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.328693  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.328719  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.329335  106607 store.go:1350] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I1203 13:17:18.329360  106607 rest.go:113] the default service ipfamily for this cluster is: IPv4
I1203 13:17:18.329493  106607 reflector.go:188] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I1203 13:17:18.330010  106607 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.330267  106607 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.330534  106607 watch_cache.go:409] Replace watchCache (rev: 21788) 
I1203 13:17:18.331124  106607 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.331947  106607 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.332588  106607 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.333283  106607 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.333665  106607 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.333774  106607 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.333922  106607 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.334804  106607 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.335528  106607 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.336003  106607 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.337272  106607 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.337719  106607 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.338532  106607 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.338840  106607 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.339352  106607 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.339719  106607 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.339928  106607 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.340268  106607 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.340563  106607 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.340851  106607 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.341137  106607 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.342053  106607 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.342309  106607 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.343005  106607 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.343754  106607 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.343987  106607 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.344330  106607 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.345020  106607 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.345256  106607 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.345870  106607 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.346422  106607 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.347152  106607 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.347758  106607 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.347994  106607 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.348132  106607 master.go:496] Skipping disabled API group "auditregistration.k8s.io".
I1203 13:17:18.348157  106607 master.go:507] Enabling API group "authentication.k8s.io".
I1203 13:17:18.348174  106607 master.go:507] Enabling API group "authorization.k8s.io".
I1203 13:17:18.348332  106607 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.348697  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.348777  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.349735  106607 store.go:1350] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1203 13:17:18.349890  106607 reflector.go:188] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1203 13:17:18.349922  106607 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.350066  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.350097  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.351037  106607 store.go:1350] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1203 13:17:18.351215  106607 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.351341  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.351364  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.351367  106607 watch_cache.go:409] Replace watchCache (rev: 21789) 
I1203 13:17:18.351401  106607 reflector.go:188] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1203 13:17:18.352201  106607 store.go:1350] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1203 13:17:18.352223  106607 master.go:507] Enabling API group "autoscaling".
I1203 13:17:18.352304  106607 reflector.go:188] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1203 13:17:18.352444  106607 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.352686  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.352710  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.353221  106607 watch_cache.go:409] Replace watchCache (rev: 21789) 
I1203 13:17:18.353506  106607 store.go:1350] Monitoring jobs.batch count at <storage-prefix>//jobs
I1203 13:17:18.353585  106607 reflector.go:188] Listing and watching *batch.Job from storage/cacher.go:/jobs
I1203 13:17:18.353708  106607 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.353878  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.353909  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.354750  106607 store.go:1350] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I1203 13:17:18.354781  106607 master.go:507] Enabling API group "batch".
I1203 13:17:18.354841  106607 watch_cache.go:409] Replace watchCache (rev: 21789) 
I1203 13:17:18.354883  106607 reflector.go:188] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I1203 13:17:18.355405  106607 watch_cache.go:409] Replace watchCache (rev: 21789) 
I1203 13:17:18.354924  106607 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.355814  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.355837  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.356972  106607 store.go:1350] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I1203 13:17:18.357000  106607 master.go:507] Enabling API group "certificates.k8s.io".
I1203 13:17:18.357069  106607 reflector.go:188] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I1203 13:17:18.357238  106607 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.357414  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.357466  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.358110  106607 store.go:1350] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I1203 13:17:18.358187  106607 reflector.go:188] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I1203 13:17:18.358277  106607 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.358397  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.358415  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.358789  106607 watch_cache.go:409] Replace watchCache (rev: 21790) 
I1203 13:17:18.359080  106607 store.go:1350] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I1203 13:17:18.359102  106607 master.go:507] Enabling API group "coordination.k8s.io".
I1203 13:17:18.359320  106607 storage_factory.go:285] storing endpointslices.discovery.k8s.io in discovery.k8s.io/v1beta1, reading as discovery.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.359390  106607 reflector.go:188] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I1203 13:17:18.359450  106607 watch_cache.go:409] Replace watchCache (rev: 21790) 
I1203 13:17:18.359516  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.359575  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.360029  106607 watch_cache.go:409] Replace watchCache (rev: 21790) 
I1203 13:17:18.360484  106607 store.go:1350] Monitoring endpointslices.discovery.k8s.io count at <storage-prefix>//endpointslices
I1203 13:17:18.360509  106607 master.go:507] Enabling API group "discovery.k8s.io".
I1203 13:17:18.360560  106607 reflector.go:188] Listing and watching *discovery.EndpointSlice from storage/cacher.go:/endpointslices
I1203 13:17:18.361005  106607 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.361405  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.361429  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.362307  106607 store.go:1350] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I1203 13:17:18.362379  106607 reflector.go:188] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I1203 13:17:18.362657  106607 master.go:507] Enabling API group "extensions".
I1203 13:17:18.362832  106607 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.363032  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.363064  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.363910  106607 store.go:1350] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I1203 13:17:18.364009  106607 reflector.go:188] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I1203 13:17:18.364058  106607 watch_cache.go:409] Replace watchCache (rev: 21790) 
I1203 13:17:18.364252  106607 watch_cache.go:409] Replace watchCache (rev: 21790) 
I1203 13:17:18.364305  106607 watch_cache.go:409] Replace watchCache (rev: 21790) 
I1203 13:17:18.364967  106607 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.365371  106607 watch_cache.go:409] Replace watchCache (rev: 21790) 
I1203 13:17:18.365574  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.365599  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.366932  106607 store.go:1350] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I1203 13:17:18.366951  106607 master.go:507] Enabling API group "networking.k8s.io".
I1203 13:17:18.367021  106607 reflector.go:188] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I1203 13:17:18.367362  106607 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.367482  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.367499  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.367904  106607 watch_cache.go:409] Replace watchCache (rev: 21790) 
I1203 13:17:18.368202  106607 store.go:1350] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I1203 13:17:18.368222  106607 master.go:507] Enabling API group "node.k8s.io".
I1203 13:17:18.368316  106607 reflector.go:188] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I1203 13:17:18.368378  106607 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.368479  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.368502  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.369211  106607 store.go:1350] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I1203 13:17:18.369268  106607 reflector.go:188] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I1203 13:17:18.369329  106607 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.369423  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.369436  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.369707  106607 watch_cache.go:409] Replace watchCache (rev: 21790) 
I1203 13:17:18.370024  106607 store.go:1350] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I1203 13:17:18.370049  106607 master.go:507] Enabling API group "policy".
I1203 13:17:18.370117  106607 reflector.go:188] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I1203 13:17:18.370225  106607 watch_cache.go:409] Replace watchCache (rev: 21790) 
I1203 13:17:18.370148  106607 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.370749  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.370841  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.371311  106607 watch_cache.go:409] Replace watchCache (rev: 21790) 
I1203 13:17:18.371879  106607 store.go:1350] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I1203 13:17:18.372100  106607 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.372278  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.372302  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.372402  106607 reflector.go:188] Listing and watching *rbac.Role from storage/cacher.go:/roles
I1203 13:17:18.373123  106607 store.go:1350] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I1203 13:17:18.373168  106607 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.373271  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.373291  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.373373  106607 reflector.go:188] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I1203 13:17:18.373838  106607 watch_cache.go:409] Replace watchCache (rev: 21790) 
I1203 13:17:18.374161  106607 store.go:1350] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I1203 13:17:18.374241  106607 reflector.go:188] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I1203 13:17:18.374514  106607 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.374601  106607 watch_cache.go:409] Replace watchCache (rev: 21790) 
I1203 13:17:18.374842  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.374869  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.375297  106607 watch_cache.go:409] Replace watchCache (rev: 21790) 
I1203 13:17:18.376144  106607 store.go:1350] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I1203 13:17:18.376197  106607 reflector.go:188] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I1203 13:17:18.376207  106607 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.376328  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.376347  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.377201  106607 watch_cache.go:409] Replace watchCache (rev: 21790) 
I1203 13:17:18.377508  106607 store.go:1350] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I1203 13:17:18.377610  106607 reflector.go:188] Listing and watching *rbac.Role from storage/cacher.go:/roles
I1203 13:17:18.377704  106607 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.377829  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.377857  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.378471  106607 store.go:1350] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I1203 13:17:18.378503  106607 reflector.go:188] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I1203 13:17:18.378556  106607 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.378844  106607 watch_cache.go:409] Replace watchCache (rev: 21791) 
I1203 13:17:18.379137  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.379164  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.379445  106607 watch_cache.go:409] Replace watchCache (rev: 21791) 
I1203 13:17:18.380011  106607 store.go:1350] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I1203 13:17:18.380226  106607 reflector.go:188] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I1203 13:17:18.380207  106607 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.380388  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.380555  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.381286  106607 watch_cache.go:409] Replace watchCache (rev: 21791) 
I1203 13:17:18.381403  106607 store.go:1350] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I1203 13:17:18.381454  106607 master.go:507] Enabling API group "rbac.authorization.k8s.io".
I1203 13:17:18.381679  106607 reflector.go:188] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I1203 13:17:18.383026  106607 watch_cache.go:409] Replace watchCache (rev: 21791) 
I1203 13:17:18.383707  106607 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.383884  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.383909  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.385355  106607 store.go:1350] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I1203 13:17:18.385391  106607 reflector.go:188] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I1203 13:17:18.385523  106607 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.385679  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.385708  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.386485  106607 watch_cache.go:409] Replace watchCache (rev: 21792) 
I1203 13:17:18.386608  106607 reflector.go:188] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I1203 13:17:18.386818  106607 store.go:1350] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I1203 13:17:18.386846  106607 master.go:507] Enabling API group "scheduling.k8s.io".
I1203 13:17:18.387202  106607 master.go:496] Skipping disabled API group "settings.k8s.io".
I1203 13:17:18.387452  106607 watch_cache.go:409] Replace watchCache (rev: 21792) 
I1203 13:17:18.387745  106607 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.387884  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.387909  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.388475  106607 store.go:1350] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I1203 13:17:18.388544  106607 reflector.go:188] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I1203 13:17:18.388667  106607 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.388770  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.388794  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.389347  106607 watch_cache.go:409] Replace watchCache (rev: 21792) 
I1203 13:17:18.389766  106607 store.go:1350] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I1203 13:17:18.389909  106607 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.390009  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.390032  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.390136  106607 reflector.go:188] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I1203 13:17:18.391233  106607 watch_cache.go:409] Replace watchCache (rev: 21792) 
I1203 13:17:18.391548  106607 store.go:1350] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I1203 13:17:18.391665  106607 reflector.go:188] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I1203 13:17:18.392014  106607 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.392129  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.392153  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.392786  106607 watch_cache.go:409] Replace watchCache (rev: 21792) 
I1203 13:17:18.393281  106607 store.go:1350] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I1203 13:17:18.393320  106607 reflector.go:188] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I1203 13:17:18.393420  106607 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.393516  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.393534  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.394039  106607 watch_cache.go:409] Replace watchCache (rev: 21792) 
I1203 13:17:18.394299  106607 store.go:1350] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I1203 13:17:18.394439  106607 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.394529  106607 reflector.go:188] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I1203 13:17:18.394534  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.394672  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.395250  106607 store.go:1350] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I1203 13:17:18.395301  106607 reflector.go:188] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I1203 13:17:18.395409  106607 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.395517  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.395540  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.396101  106607 store.go:1350] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I1203 13:17:18.396129  106607 master.go:507] Enabling API group "storage.k8s.io".
I1203 13:17:18.396144  106607 master.go:496] Skipping disabled API group "flowcontrol.apiserver.k8s.io".
I1203 13:17:18.396205  106607 watch_cache.go:409] Replace watchCache (rev: 21792) 
I1203 13:17:18.396206  106607 reflector.go:188] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I1203 13:17:18.396371  106607 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.396577  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.396624  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.397231  106607 watch_cache.go:409] Replace watchCache (rev: 21792) 
I1203 13:17:18.397388  106607 store.go:1350] Monitoring deployments.apps count at <storage-prefix>//deployments
I1203 13:17:18.397483  106607 watch_cache.go:409] Replace watchCache (rev: 21792) 
I1203 13:17:18.397524  106607 reflector.go:188] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I1203 13:17:18.397541  106607 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.397661  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.397719  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.398807  106607 store.go:1350] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I1203 13:17:18.398857  106607 reflector.go:188] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I1203 13:17:18.399040  106607 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.399292  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.399315  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.400147  106607 watch_cache.go:409] Replace watchCache (rev: 21793) 
I1203 13:17:18.400295  106607 watch_cache.go:409] Replace watchCache (rev: 21793) 
I1203 13:17:18.400541  106607 store.go:1350] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I1203 13:17:18.400617  106607 reflector.go:188] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I1203 13:17:18.400934  106607 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.401482  106607 watch_cache.go:409] Replace watchCache (rev: 21793) 
I1203 13:17:18.401540  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.401563  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.402359  106607 store.go:1350] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I1203 13:17:18.402553  106607 reflector.go:188] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I1203 13:17:18.402961  106607 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.403222  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.403304  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.403537  106607 watch_cache.go:409] Replace watchCache (rev: 21793) 
I1203 13:17:18.404177  106607 store.go:1350] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I1203 13:17:18.404308  106607 master.go:507] Enabling API group "apps".
I1203 13:17:18.404233  106607 reflector.go:188] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I1203 13:17:18.404600  106607 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.404812  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.404938  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.405332  106607 watch_cache.go:409] Replace watchCache (rev: 21793) 
I1203 13:17:18.405654  106607 store.go:1350] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I1203 13:17:18.405737  106607 reflector.go:188] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I1203 13:17:18.406300  106607 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.406588  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.406822  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.406937  106607 watch_cache.go:409] Replace watchCache (rev: 21793) 
I1203 13:17:18.407515  106607 store.go:1350] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I1203 13:17:18.407538  106607 reflector.go:188] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I1203 13:17:18.407927  106607 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.408294  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.408434  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.409265  106607 watch_cache.go:409] Replace watchCache (rev: 21793) 
I1203 13:17:18.409813  106607 store.go:1350] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I1203 13:17:18.409931  106607 reflector.go:188] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I1203 13:17:18.409941  106607 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.410243  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.410318  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.410816  106607 watch_cache.go:409] Replace watchCache (rev: 21793) 
I1203 13:17:18.411178  106607 store.go:1350] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I1203 13:17:18.411201  106607 master.go:507] Enabling API group "admissionregistration.k8s.io".
I1203 13:17:18.411321  106607 reflector.go:188] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I1203 13:17:18.411402  106607 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.412095  106607 watch_cache.go:409] Replace watchCache (rev: 21793) 
I1203 13:17:18.412117  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:18.412134  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:18.412791  106607 store.go:1350] Monitoring events count at <storage-prefix>//events
I1203 13:17:18.412817  106607 master.go:507] Enabling API group "events.k8s.io".
I1203 13:17:18.412836  106607 reflector.go:188] Listing and watching *core.Event from storage/cacher.go:/events
I1203 13:17:18.413009  106607 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.413192  106607 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.413477  106607 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.413581  106607 watch_cache.go:409] Replace watchCache (rev: 21793) 
I1203 13:17:18.413652  106607 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.413796  106607 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.413919  106607 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.414123  106607 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.414259  106607 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.414363  106607 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.414495  106607 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.415432  106607 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.415802  106607 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.417894  106607 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.418206  106607 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.419049  106607 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.419347  106607 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.420078  106607 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.420339  106607 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.421420  106607 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.421850  106607 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1203 13:17:18.422002  106607 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I1203 13:17:18.422793  106607 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.423088  106607 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.423473  106607 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.424440  106607 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.425367  106607 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.426465  106607 storage_factory.go:285] storing endpointslices.discovery.k8s.io in discovery.k8s.io/v1beta1, reading as discovery.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1203 13:17:18.426672  106607 genericapiserver.go:404] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
I1203 13:17:18.427686  106607 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.428157  106607 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.429254  106607 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.430264  106607 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.430740  106607 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.431580  106607 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1203 13:17:18.431765  106607 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I1203 13:17:18.432808  106607 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.433184  106607 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.433918  106607 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.434895  106607 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.435811  106607 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.438436  106607 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.439421  106607 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.440329  106607 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.441362  106607 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.442127  106607 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.442768  106607 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1203 13:17:18.442844  106607 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I1203 13:17:18.443532  106607 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.444195  106607 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1203 13:17:18.444271  106607 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I1203 13:17:18.444923  106607 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.445522  106607 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.446578  106607 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.446946  106607 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.447720  106607 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.448298  106607 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.449001  106607 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.449534  106607 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1203 13:17:18.449615  106607 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I1203 13:17:18.450462  106607 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.451307  106607 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.451650  106607 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.452423  106607 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.452740  106607 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.453073  106607 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.453821  106607 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.454282  106607 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.454569  106607 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.455445  106607 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.455782  106607 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.456075  106607 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1203 13:17:18.456230  106607 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W1203 13:17:18.456288  106607 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I1203 13:17:18.457024  106607 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.457759  106607 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.458449  106607 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.459240  106607 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1203 13:17:18.460377  106607 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"9a02a672-6f56-4361-ba5e-d5c623fd0ee3", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
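Taken together, the "storing <resource> in <group>/<version>, reading as <group>/__internal" lines above show the storage factory pinning, per resource, the external version used to encode objects into etcd and the internal version they are decoded into for serving. A minimal self-contained sketch of that mapping, with hypothetical names standing in for the real k8s.io/apiserver storage factory:

package storagedemo

import "fmt"

// preferredStorageVersion is a hypothetical stand-in for the codec
// configuration the storage factory consults; the entries are copied
// from the log lines above.
var preferredStorageVersion = map[string]string{
	"deployments.apps":                       "apps/v1",
	"clusterroles.rbac.authorization.k8s.io": "rbac.authorization.k8s.io/v1",
	"csidrivers.storage.k8s.io":              "storage.k8s.io/v1beta1",
}

// encoding reproduces the two halves of each log line: objects are
// written in the group's preferred external version and read back as
// the group's unversioned __internal representation.
func encoding(resource, group string) (write, read string) {
	return preferredStorageVersion[resource], group + "/__internal"
}

func Example() {
	w, r := encoding("deployments.apps", "apps")
	fmt.Printf("storing deployments.apps in %s, reading as %s\n", w, r)
}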
I1203 13:17:18.464746  106607 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1203 13:17:18.464840  106607 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I1203 13:17:18.464890  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:18.464946  106607 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1203 13:17:18.464983  106607 healthz.go:177] healthz check poststarthook/start-cluster-authentication-info-controller failed: not finished
I1203 13:17:18.465017  106607 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/start-cluster-authentication-info-controller failed: reason withheld
healthz check failed
I1203 13:17:18.465144  106607 httplog.go:90] GET /healthz: (562.415µs) 0 [Go-http-client/1.1 127.0.0.1:34102]
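The block ending at the GET /healthz line above is the aggregated health report: every registered check prints "[+]<name> ok" or "[-]<name> failed: reason withheld" (failure reasons are withheld from unauthorized callers), and a single failing check fails the whole probe. A minimal sketch of that aggregation with a hypothetical check type, not the real k8s.io/apiserver healthz package:

package main

import (
	"errors"
	"fmt"
	"net/http"
	"strings"
)

// check is a hypothetical named health check.
type check struct {
	name string
	run  func() error
}

// healthzHandler renders the same [+]/[-] report seen in the log,
// withholding failure reasons from the response body.
func healthzHandler(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var b strings.Builder
		failed := false
		for _, c := range checks {
			if err := c.run(); err != nil {
				failed = true
				fmt.Fprintf(&b, "[-]%s failed: reason withheld\n", c.name)
			} else {
				fmt.Fprintf(&b, "[+]%s ok\n", c.name)
			}
		}
		if failed {
			b.WriteString("healthz check failed\n")
			http.Error(w, b.String(), http.StatusInternalServerError)
			return
		}
		fmt.Fprint(w, b.String())
	}
}

func main() {
	checks := []check{
		{"ping", func() error { return nil }},
		{"etcd", func() error { return errors.New("client connection not yet established") }},
	}
	http.ListenAndServe(":8080", healthzHandler(checks))
}

Hitting this with curl -i localhost:8080/healthz reproduces a failing report of the same shape as the blocks in this log.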
I1203 13:17:18.466022  106607 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.264518ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34104]
W1203 13:17:18.466418  106607 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1203 13:17:18.466583  106607 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I1203 13:17:18.466594  106607 shared_informer.go:197] Waiting for caches to sync for cluster_authentication_trust_controller
I1203 13:17:18.467064  106607 reflector.go:153] Starting reflector *v1.ConfigMap (12h0m0s) from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I1203 13:17:18.467089  106607 reflector.go:188] Listing and watching *v1.ConfigMap from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I1203 13:17:18.468091  106607 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps?limit=500&resourceVersion=0: (504.489µs) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34104]
I1203 13:17:18.468836  106607 get.go:251] Starting watch for /api/v1/namespaces/kube-system/configmaps, rv=21788 labels= fields= timeout=8m38s
I1203 13:17:18.469519  106607 httplog.go:90] GET /api/v1/services: (1.321645ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:18.474983  106607 httplog.go:90] GET /api/v1/services: (1.22816ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
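For reading the rest of the transcript, each httplog.go line follows one fixed shape; decomposed as a struct below, with field semantics inferred from the lines themselves rather than taken from the logger's spec:

package httplogdemo

// entry decomposes one httplog.go line, e.g.
//   GET /api/v1/services: (1.321645ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
type entry struct {
	Verb       string // HTTP method, "GET"
	Path       string // request path, "/api/v1/services"
	Latency    string // server-side handling time, "1.321645ms"
	Status     int    // response code; 0 where no code was recorded, as in the /healthz probes above
	UserAgent  string // "deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format"
	RemoteAddr string // client ip:port, "127.0.0.1:34102"
}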
I1203 13:17:18.477951  106607 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1203 13:17:18.477982  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:18.477995  106607 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1203 13:17:18.478004  106607 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:18.478029  106607 httplog.go:90] GET /healthz: (165.65µs) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:18.479700  106607 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.547135ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34110]
I1203 13:17:18.480424  106607 httplog.go:90] GET /api/v1/services: (1.326902ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:18.482523  106607 httplog.go:90] POST /api/v1/namespaces: (1.937955ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34110]
I1203 13:17:18.483419  106607 httplog.go:90] GET /api/v1/services: (3.45134ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34114]
I1203 13:17:18.492388  106607 httplog.go:90] GET /api/v1/namespaces/kube-public: (8.368766ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34114]
I1203 13:17:18.495656  106607 httplog.go:90] POST /api/v1/namespaces: (2.79977ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34114]
I1203 13:17:18.500108  106607 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (3.899673ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34114]
I1203 13:17:18.507556  106607 httplog.go:90] POST /api/v1/namespaces: (6.508673ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34114]
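The three GET-404/POST-201 pairs above are the bootstrap controller ensuring the kube-system, kube-public, and kube-node-lease namespaces exist. A client-go sketch of that ensure-if-missing step (recent client-go signatures that take a context; this is the client-side shape, not the apiserver's internal code):

package bootdemo

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// systemNamespaces are the three namespaces bootstrapped above.
var systemNamespaces = []string{"kube-system", "kube-public", "kube-node-lease"}

// ensureNamespace mirrors the GET 404 followed by POST 201 seen in the
// httplog lines: create the namespace only if the read says NotFound,
// and tolerate losing a benign create race.
func ensureNamespace(ctx context.Context, cs kubernetes.Interface, name string) error {
	if _, err := cs.CoreV1().Namespaces().Get(ctx, name, metav1.GetOptions{}); err == nil {
		return nil // already present
	} else if !apierrors.IsNotFound(err) {
		return err
	}
	ns := &corev1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: name}}
	_, err := cs.CoreV1().Namespaces().Create(ctx, ns, metav1.CreateOptions{})
	if apierrors.IsAlreadyExists(err) {
		return nil
	}
	return err
}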
I1203 13:17:18.566109  106607 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1203 13:17:18.566141  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:18.566154  106607 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1203 13:17:18.566164  106607 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:18.566208  106607 httplog.go:90] GET /healthz: (259.421µs) 0 [Go-http-client/1.1 127.0.0.1:34114]
I1203 13:17:18.566772  106607 shared_informer.go:227] caches populated
I1203 13:17:18.566855  106607 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller 
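"Waiting for caches to sync" followed by "Caches are synced" is the shared-informer startup handshake: the reflector LISTs (the configmaps GET with resourceVersion=0 above), replays the result into its local cache, and only then may the controller trust reads from that cache. A client-go sketch of the handshake, using the same 12h resync period the reflector logs:

package bootdemo

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// startAndWait starts the shared informers and blocks until the initial
// LIST has populated every requested cache, returning false if stopCh
// closes first.
func startAndWait(cs kubernetes.Interface, stopCh <-chan struct{}) bool {
	factory := informers.NewSharedInformerFactory(cs, 12*time.Hour)
	cmInformer := factory.Core().V1().ConfigMaps().Informer()
	factory.Start(stopCh)
	return cache.WaitForCacheSync(stopCh, cmInformer.HasSynced)
}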
[... the same failing healthz poll (etcd client connection not yet established; rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes hooks not finished) repeats roughly every 100ms from 13:17:18.578 through 13:17:19.278; 15 near-identical iterations elided ...]
I1203 13:17:19.305377  106607 client.go:361] parsed scheme: "endpoint"
I1203 13:17:19.305450  106607 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:17:19.367464  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:19.367494  106607 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1203 13:17:19.367504  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:19.367548  106607 httplog.go:90] GET /healthz: (1.580769ms) 0 [Go-http-client/1.1 127.0.0.1:34114]
I1203 13:17:19.379657  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:19.379691  106607 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1203 13:17:19.379701  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:19.379738  106607 httplog.go:90] GET /healthz: (1.185434ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34114]
I1203 13:17:19.466106  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.273742ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34114]
I1203 13:17:19.466532  106607 httplog.go:90] GET /apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical: (1.675554ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.467267  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:19.467293  106607 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1203 13:17:19.467301  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:19.467339  106607 httplog.go:90] GET /healthz: (1.098713ms) 0 [Go-http-client/1.1 127.0.0.1:34388]
I1203 13:17:19.468094  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.190531ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34114]
I1203 13:17:19.468579  106607 httplog.go:90] POST /apis/scheduling.k8s.io/v1/priorityclasses: (1.445378ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.468826  106607 storage_scheduling.go:133] created PriorityClass system-node-critical with value 2000001000
I1203 13:17:19.469534  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.131384ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34114]
I1203 13:17:19.469815  106607 httplog.go:90] GET /apis/scheduling.k8s.io/v1/priorityclasses/system-cluster-critical: (800.224µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.470594  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (716.821µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34114]
I1203 13:17:19.471580  106607 httplog.go:90] POST /apis/scheduling.k8s.io/v1/priorityclasses: (1.509215ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.471803  106607 storage_scheduling.go:133] created PriorityClass system-cluster-critical with value 2000000000
I1203 13:17:19.471825  106607 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
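The scheduling bootstrap hook above guarantees the two built-in priority classes before its healthz check flips to ok. Their definitions, with the exact values from the storage_scheduling.go lines (creation follows the same ensure-if-missing shape sketched for namespaces earlier):

package bootdemo

import (
	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// systemPriorityClasses lists the classes created above; the values are
// copied from the log and sit above anything a user may define.
var systemPriorityClasses = []schedulingv1.PriorityClass{
	{ObjectMeta: metav1.ObjectMeta{Name: "system-node-critical"}, Value: 2000001000},
	{ObjectMeta: metav1.ObjectMeta{Name: "system-cluster-critical"}, Value: 2000000000},
}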
I1203 13:17:19.472910  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.895668ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34114]
I1203 13:17:19.474098  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (802.254µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.475269  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (688.417µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.476264  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (677.001µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.479582  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (3.005981ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.480169  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:19.480211  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:19.480245  106607 httplog.go:90] GET /healthz: (1.671026ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:19.480846  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (826.7µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.483161  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.89515ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.483397  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I1203 13:17:19.484424  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (740.372µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.486264  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.45568ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.486427  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I1203 13:17:19.487566  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (958.978µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.489224  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.308327ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.489391  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I1203 13:17:19.490428  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (857.151µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.492286  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.457478ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.492507  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I1203 13:17:19.493518  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (826.828µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.495484  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.621728ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.495667  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I1203 13:17:19.496667  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (875.374µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.498690  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.713296ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.498882  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I1203 13:17:19.500311  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.258446ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.501997  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.411624ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.502235  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I1203 13:17:19.503468  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.023385ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.505761  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.913837ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.505899  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I1203 13:17:19.506723  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (718.566µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.508934  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.832014ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.509128  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I1203 13:17:19.510029  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (757.784µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.512095  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.785634ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.512329  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I1203 13:17:19.513385  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (900.551µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.515269  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.437631ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.515471  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I1203 13:17:19.516593  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (890.924µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.518449  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.476837ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.518727  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I1203 13:17:19.520254  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.339505ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.522088  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.522846ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.522285  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I1203 13:17:19.523316  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (860.638µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.525074  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.397041ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.525223  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I1203 13:17:19.526243  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (858.998µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.527981  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.424666ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.528131  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I1203 13:17:19.529471  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (1.180499ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.531242  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.122914ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.531475  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I1203 13:17:19.532469  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (806.077µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.534069  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.249969ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.534315  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I1203 13:17:19.535405  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (927.464µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.537849  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.992611ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.538088  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I1203 13:17:19.539103  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (829.557µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.540890  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.50216ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.541122  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I1203 13:17:19.542474  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (1.103297ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.544356  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.469789ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.544556  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I1203 13:17:19.545952  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.020842ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.547476  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.207183ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.547698  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I1203 13:17:19.548711  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (812.935µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.551032  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.980164ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.551248  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I1203 13:17:19.552183  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (748.846µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.553925  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.422673ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.554094  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I1203 13:17:19.555161  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (898.079µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.557403  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.6839ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.557658  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I1203 13:17:19.559308  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (1.484452ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.561281  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.554075ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.561537  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I1203 13:17:19.562562  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (815.363µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.564933  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.014973ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.565134  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1203 13:17:19.566154  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (779.529µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.566524  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:19.566544  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:19.566664  106607 httplog.go:90] GET /healthz: (890.477µs) 0 [Go-http-client/1.1 127.0.0.1:34388]
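The block above is the apiserver's aggregated healthz report: every check passes except the rbac/bootstrap-roles poststarthook, which stays [-] until the bootstrap policy below finishes being written, so each GET /healthz returns a non-200 status in the meantime. A caller that needs the server ready can simply poll the endpoint until it reports ok. A minimal sketch of that wait loop, assuming a placeholder server address (the integration framework wires up its own host and port):

package main

import (
	"fmt"
	"net/http"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// waitForHealthz polls /healthz until every check, including the
// rbac/bootstrap-roles poststarthook, reports ok (HTTP 200).
func waitForHealthz(baseURL string) error {
	return wait.PollImmediate(100*time.Millisecond, 30*time.Second, func() (bool, error) {
		resp, err := http.Get(baseURL + "/healthz")
		if err != nil {
			return false, nil // server not reachable yet; keep polling
		}
		defer resp.Body.Close()
		return resp.StatusCode == http.StatusOK, nil
	})
}

func main() {
	// Placeholder address; not the port used by this test run.
	if err := waitForHealthz("http://127.0.0.1:8080"); err != nil {
		fmt.Println("apiserver never became healthy:", err)
	}
}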
I1203 13:17:19.568125  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.574509ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.568371  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1203 13:17:19.569170  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (673.102µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.570784  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.308729ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.571027  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1203 13:17:19.572190  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (968.109µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.574064  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.486667ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.574269  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1203 13:17:19.575277  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (829.206µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.577343  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.639953ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.577615  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I1203 13:17:19.578942  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.097678ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.579109  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:19.579134  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:19.579177  106607 httplog.go:90] GET /healthz: (706.079µs) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:19.583108  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.605752ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.583446  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I1203 13:17:19.584619  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (823.76µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.586439  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.380888ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.586672  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1203 13:17:19.587760  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (865.105µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.589833  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.716528ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.590162  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I1203 13:17:19.591361  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (987.969µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.593215  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.458154ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.593444  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1203 13:17:19.594416  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (766.943µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.596138  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.327651ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.596366  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1203 13:17:19.597978  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.419277ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.600450  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.532278ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.600662  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I1203 13:17:19.601595  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (732.493µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.603525  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.573764ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.603705  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I1203 13:17:19.604690  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (827.915µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.606726  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.737579ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.606987  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I1203 13:17:19.608324  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.062557ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.610044  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.346672ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.610348  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1203 13:17:19.612087  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (863.868µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.614159  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.711771ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.614356  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1203 13:17:19.615589  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (1.050315ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.617901  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.969173ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.618435  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1203 13:17:19.620090  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.371893ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.622710  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.165273ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.622968  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I1203 13:17:19.624156  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (996.055µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.625913  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.449962ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.626207  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1203 13:17:19.627580  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.198315ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.629883  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.728921ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.630146  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I1203 13:17:19.631203  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (844.295µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.632905  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.364177ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.633087  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I1203 13:17:19.634214  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (870.645µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.635975  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.381199ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.636191  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I1203 13:17:19.637190  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (757.849µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.639829  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.217595ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.640032  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1203 13:17:19.641033  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (779.382µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.642989  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.523014ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.643186  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I1203 13:17:19.644135  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (771.277µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.645529  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.064662ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.645804  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I1203 13:17:19.646818  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (828.266µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.648618  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.33759ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.648855  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1203 13:17:19.667782  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (2.769456ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.668655  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:19.668747  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:19.668955  106607 httplog.go:90] GET /healthz: (3.083044ms) 0 [Go-http-client/1.1 127.0.0.1:34388]
I1203 13:17:19.680570  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:19.680788  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:19.680962  106607 httplog.go:90] GET /healthz: (1.528208ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:19.687021  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.100938ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:19.687659  106607 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
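Every clusterrole above follows the same reconcile pattern: a GET that returns 404, then a POST that returns 201 and a "created clusterrole" confirmation. A minimal sketch of that ensure-exists step, written against a current client-go (the methods in this 2019-era log predate the context argument); the kubeconfig path and role name are placeholders, not values from this run:

package main

import (
	"context"
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// ensureClusterRole mirrors the GET-then-POST pattern in the log: look the
// role up first, and create it only when the apiserver answers 404.
func ensureClusterRole(cs kubernetes.Interface, role *rbacv1.ClusterRole) error {
	_, err := cs.RbacV1().ClusterRoles().Get(context.TODO(), role.Name, metav1.GetOptions{})
	if err == nil {
		return nil // already present, nothing to do
	}
	if !apierrors.IsNotFound(err) {
		return err // a real error, not just "missing"
	}
	_, err = cs.RbacV1().ClusterRoles().Create(context.TODO(), role, metav1.CreateOptions{})
	return err
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)
	role := &rbacv1.ClusterRole{ObjectMeta: metav1.ObjectMeta{Name: "system:example"}}
	if err := ensureClusterRole(cs, role); err != nil {
		panic(err)
	}
	fmt.Println("clusterrole ensured")
}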
I1203 13:17:19.706385  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.022029ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:19.727010  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.925442ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:19.727251  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I1203 13:17:19.746367  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.386298ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:19.767155  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.250187ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:19.767835  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I1203 13:17:19.769312  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:19.769403  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:19.769624  106607 httplog.go:90] GET /healthz: (3.708953ms) 0 [Go-http-client/1.1 127.0.0.1:34102]
I1203 13:17:19.779915  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:19.780050  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:19.780278  106607 httplog.go:90] GET /healthz: (1.379516ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.789132  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (4.184381ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.808063  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.078897ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.808437  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I1203 13:17:19.826204  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.22952ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.846821  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.867364ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.847073  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I1203 13:17:19.866217  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.296304ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.866585  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:19.866615  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:19.866696  106607 httplog.go:90] GET /healthz: (890.21µs) 0 [Go-http-client/1.1 127.0.0.1:34388]
I1203 13:17:19.879622  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:19.879707  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:19.879787  106607 httplog.go:90] GET /healthz: (1.249658ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:19.887033  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.095735ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:19.887250  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I1203 13:17:19.906791  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.448913ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:19.927559  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.532005ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:19.927820  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I1203 13:17:19.946143  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.163346ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:19.967357  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.342778ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:19.967684  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I1203 13:17:19.967935  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:19.967959  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:19.967991  106607 httplog.go:90] GET /healthz: (1.997916ms) 0 [Go-http-client/1.1 127.0.0.1:34102]
I1203 13:17:19.979197  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:19.979224  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:19.979250  106607 httplog.go:90] GET /healthz: (698.954µs) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:19.986328  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.338236ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.009349  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.355013ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.009675  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I1203 13:17:20.026512  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.534709ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.047097  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.118559ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.047879  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I1203 13:17:20.067132  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (2.068964ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.067789  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:20.067824  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:20.067863  106607 httplog.go:90] GET /healthz: (893.465µs) 0 [Go-http-client/1.1 127.0.0.1:34388]
I1203 13:17:20.079793  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:20.079850  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:20.079890  106607 httplog.go:90] GET /healthz: (1.321822ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:20.088032  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.050498ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:20.088262  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I1203 13:17:20.106158  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.151644ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:20.127010  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.028025ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:20.127252  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1203 13:17:20.146297  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.3227ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:20.167183  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.199015ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:20.167337  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:20.167360  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:20.167379  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1203 13:17:20.167390  106607 httplog.go:90] GET /healthz: (1.611943ms) 0 [Go-http-client/1.1 127.0.0.1:34102]
I1203 13:17:20.179619  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:20.179667  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:20.179720  106607 httplog.go:90] GET /healthz: (1.186055ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.186213  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.244952ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.207405  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.442592ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.207791  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1203 13:17:20.226231  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.266752ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.247239  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.221235ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.247480  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1203 13:17:20.265798  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (882.561µs) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.266816  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:20.266846  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:20.266926  106607 httplog.go:90] GET /healthz: (776.188µs) 0 [Go-http-client/1.1 127.0.0.1:34102]
I1203 13:17:20.279352  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:20.279390  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:20.279430  106607 httplog.go:90] GET /healthz: (886.997µs) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.286845  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.899092ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.287155  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I1203 13:17:20.306267  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.163571ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.327002  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.993337ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.327293  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I1203 13:17:20.346356  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.275483ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.366810  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.845582ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.367125  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1203 13:17:20.367127  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:20.367164  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:20.367192  106607 httplog.go:90] GET /healthz: (1.363934ms) 0 [Go-http-client/1.1 127.0.0.1:34388]
I1203 13:17:20.379472  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:20.379509  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:20.379553  106607 httplog.go:90] GET /healthz: (971.75µs) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:20.386067  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.143025ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:20.408060  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.856706ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:20.408297  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I1203 13:17:20.426776  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.822238ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:20.446504  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.655403ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:20.446743  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1203 13:17:20.466420  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.059834ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:20.466895  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:20.466916  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:20.466941  106607 httplog.go:90] GET /healthz: (1.094316ms) 0 [Go-http-client/1.1 127.0.0.1:34102]
I1203 13:17:20.480214  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:20.480251  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:20.480300  106607 httplog.go:90] GET /healthz: (1.637893ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.486742  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.831051ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.487039  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1203 13:17:20.506365  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.394107ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.527086  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.1189ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.527330  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I1203 13:17:20.546363  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.336819ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.567083  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:20.567132  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:20.567202  106607 httplog.go:90] GET /healthz: (1.247099ms) 0 [Go-http-client/1.1 127.0.0.1:34388]
I1203 13:17:20.568044  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.918479ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.568246  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I1203 13:17:20.579605  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:20.579697  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:20.579758  106607 httplog.go:90] GET /healthz: (1.119102ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.586387  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.322633ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.607806  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.537502ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.608057  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I1203 13:17:20.626414  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.423052ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.648398  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.341142ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.648780  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1203 13:17:20.666739  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.581512ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.667095  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:20.667138  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:20.667183  106607 httplog.go:90] GET /healthz: (1.027615ms) 0 [Go-http-client/1.1 127.0.0.1:34388]
I1203 13:17:20.680108  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:20.680147  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:20.680185  106607 httplog.go:90] GET /healthz: (1.482487ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:20.691416  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.719677ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:20.691784  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1203 13:17:20.708410  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (3.218229ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:20.727909  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.812485ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:20.728506  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1203 13:17:20.747268  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (2.118832ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:20.767803  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.681561ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:20.767831  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:20.767848  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:20.767878  106607 httplog.go:90] GET /healthz: (1.939606ms) 0 [Go-http-client/1.1 127.0.0.1:34102]
I1203 13:17:20.768069  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I1203 13:17:20.779953  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:20.780004  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:20.780081  106607 httplog.go:90] GET /healthz: (1.375454ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.787200  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (2.247759ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.807301  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.303995ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.807691  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1203 13:17:20.826414  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.432261ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.848291  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.215801ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.848695  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I1203 13:17:20.866385  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.286653ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.866620  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:20.866670  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:20.866702  106607 httplog.go:90] GET /healthz: (823.751µs) 0 [Go-http-client/1.1 127.0.0.1:34388]
I1203 13:17:20.879684  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:20.879777  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:20.879824  106607 httplog.go:90] GET /healthz: (1.20242ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:20.887026  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.979752ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:20.887421  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I1203 13:17:20.906163  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.130622ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:20.927957  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.922877ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:20.928315  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I1203 13:17:20.946687  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.507869ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:20.967287  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.320475ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:20.967566  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1203 13:17:20.968358  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:20.968398  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:20.968433  106607 httplog.go:90] GET /healthz: (2.340559ms) 0 [Go-http-client/1.1 127.0.0.1:34102]
I1203 13:17:20.979709  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:20.979762  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:20.979809  106607 httplog.go:90] GET /healthz: (921.069µs) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:20.986064  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.108426ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.007248  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.22732ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.007563  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I1203 13:17:21.026319  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.304894ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.047101  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.093525ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.047431  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I1203 13:17:21.066267  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.180531ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.067197  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:21.067221  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:21.067260  106607 httplog.go:90] GET /healthz: (1.066841ms) 0 [Go-http-client/1.1 127.0.0.1:34388]
I1203 13:17:21.079556  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:21.079591  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:21.079646  106607 httplog.go:90] GET /healthz: (972.932µs) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.087134  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.971355ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.087454  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1203 13:17:21.106356  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.217517ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.127111  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.057735ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.127563  106607 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1203 13:17:21.146339  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.136752ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.148229  106607 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.304949ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.167691  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:21.167871  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:21.168009  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.929035ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.168138  106607 httplog.go:90] GET /healthz: (2.276334ms) 0 [Go-http-client/1.1 127.0.0.1:34102]
I1203 13:17:21.168668  106607 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I1203 13:17:21.181584  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:21.181623  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:21.181727  106607 httplog.go:90] GET /healthz: (3.037637ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.186106  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.153403ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.187825  106607 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.394434ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.207540  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.54785ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.207779  106607 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1203 13:17:21.226154  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.103643ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.227944  106607 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.228286ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.248258  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.227982ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.248489  106607 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
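Each bootstrap role above follows the same sequence: a GET that 404s, a GET confirming the target namespace exists, then a POST that returns 201. A hedged sketch of that reconcile step, assuming a configured clientset and the context-free client-go signatures of this era (illustrative, not the storage_rbac.go implementation):

```go
package main

import (
	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureRole creates the role only if it does not already exist,
// matching the GET 404 -> POST 201 pairs in the log.
func ensureRole(c kubernetes.Interface, ns string, role *rbacv1.Role) error {
	_, err := c.RbacV1().Roles(ns).Get(role.Name, metav1.GetOptions{})
	if err == nil {
		return nil // already bootstrapped; nothing to do
	}
	if !apierrors.IsNotFound(err) {
		return err // a real error, not just "missing"
	}
	_, err = c.RbacV1().Roles(ns).Create(role)
	return err
}
```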
I1203 13:17:21.267175  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:21.267203  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:21.267239  106607 httplog.go:90] GET /healthz: (1.349414ms) 0 [Go-http-client/1.1 127.0.0.1:34102]
I1203 13:17:21.267745  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (2.619302ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.269432  106607 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.223154ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.280999  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:21.281026  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:21.281075  106607 httplog.go:90] GET /healthz: (2.558882ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.287997  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.015229ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.288504  106607 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1203 13:17:21.306428  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.36044ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.310263  106607 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.991778ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.328669  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.21924ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.328982  106607 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1203 13:17:21.347164  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (2.143291ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.349131  106607 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.347682ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.367588  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.554437ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.367872  106607 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1203 13:17:21.368391  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:21.368414  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:21.368461  106607 httplog.go:90] GET /healthz: (2.394259ms) 0 [Go-http-client/1.1 127.0.0.1:34102]
I1203 13:17:21.379687  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:21.379719  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:21.379760  106607 httplog.go:90] GET /healthz: (1.188058ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.386049  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.040401ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.388567  106607 httplog.go:90] GET /api/v1/namespaces/kube-public: (2.101889ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.407224  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.110652ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.408028  106607 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1203 13:17:21.426209  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.173097ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.427999  106607 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.27861ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.448369  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.334748ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.448591  106607 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I1203 13:17:21.466042  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.058803ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.467594  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:21.467628  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:21.467698  106607 httplog.go:90] GET /healthz: (1.914737ms) 0 [Go-http-client/1.1 127.0.0.1:34388]
I1203 13:17:21.467843  106607 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.439599ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.479819  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:21.479851  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:21.479888  106607 httplog.go:90] GET /healthz: (1.136513ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.487189  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.191629ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.487524  106607 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1203 13:17:21.507018  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.492115ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.508918  106607 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.154691ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.530337  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.837271ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.530720  106607 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1203 13:17:21.546265  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.158287ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.551892  106607 httplog.go:90] GET /api/v1/namespaces/kube-system: (5.012606ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.567345  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.261508ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.567601  106607 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1203 13:17:21.567691  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:21.567710  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:21.567744  106607 httplog.go:90] GET /healthz: (1.842063ms) 0 [Go-http-client/1.1 127.0.0.1:34388]
I1203 13:17:21.579428  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:21.579457  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:21.579514  106607 httplog.go:90] GET /healthz: (955.285µs) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.585999  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.060353ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.587443  106607 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.105604ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.610755  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (5.652027ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.611095  106607 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1203 13:17:21.626919  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.70358ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.628836  106607 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.416896ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.647005  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.00245ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.647432  106607 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1203 13:17:21.666928  106607 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.199563ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.669461  106607 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.502423ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34388]
I1203 13:17:21.669922  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:21.669953  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:21.669984  106607 httplog.go:90] GET /healthz: (4.08478ms) 0 [Go-http-client/1.1 127.0.0.1:34102]
I1203 13:17:21.679872  106607 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1203 13:17:21.679900  106607 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1203 13:17:21.679966  106607 httplog.go:90] GET /healthz: (1.039032ms) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.687546  106607 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.400379ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.687926  106607 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1203 13:17:21.767216  106607 httplog.go:90] GET /healthz: (1.218939ms) 200 [Go-http-client/1.1 127.0.0.1:34102]
W1203 13:17:21.768734  106607 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1203 13:17:21.768774  106607 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1203 13:17:21.768791  106607 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
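The three warnings above come from client-go's cache mutation detector, which integration tests enable (via the KUBE_CACHE_MUTATION_DETECTOR environment variable) to catch code that mutates objects obtained from an informer cache. It keeps a deep copy of every object handed out and compares periodically, which is why it warns about memory leakage. A simplified sketch of the idea, with illustrative names rather than the real k8s.io/client-go/tools/cache types:

```go
package main

import (
	"reflect"
	"sync"

	"k8s.io/apimachinery/pkg/runtime"
)

type pair struct {
	cached runtime.Object // object still referenced by the cache
	copied runtime.Object // deep copy taken when it was handed out
}

// mutationDetector retains every pair forever, hence the leakage warning.
type mutationDetector struct {
	mu    sync.Mutex
	pairs []pair
}

func (d *mutationDetector) AddObject(obj runtime.Object) {
	d.mu.Lock()
	defer d.mu.Unlock()
	d.pairs = append(d.pairs, pair{cached: obj, copied: obj.DeepCopyObject()})
}

// check panics if any cached object no longer matches its copy, i.e. a
// client mutated shared cache state instead of deep-copying first.
func (d *mutationDetector) check() {
	d.mu.Lock()
	defer d.mu.Unlock()
	for _, p := range d.pairs {
		if !reflect.DeepEqual(p.cached, p.copied) {
			panic("cache object was mutated by a client")
		}
	}
}
```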
I1203 13:17:21.771699  106607 httplog.go:90] POST /apis/apps/v1/namespaces/test-deployment-available-condition/deployments: (2.36731ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.772159  106607 reflector.go:153] Starting reflector *v1.Deployment (12h0m0s) from k8s.io/client-go/informers/factory.go:135
I1203 13:17:21.772177  106607 reflector.go:188] Listing and watching *v1.Deployment from k8s.io/client-go/informers/factory.go:135
I1203 13:17:21.772396  106607 reflector.go:153] Starting reflector *v1.ReplicaSet (12h0m0s) from k8s.io/client-go/informers/factory.go:135
I1203 13:17:21.772410  106607 reflector.go:188] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:135
I1203 13:17:21.772720  106607 reflector.go:153] Starting reflector *v1.Pod (12h0m0s) from k8s.io/client-go/informers/factory.go:135
I1203 13:17:21.772735  106607 reflector.go:188] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:135
I1203 13:17:21.772860  106607 httplog.go:90] GET /apis/apps/v1/deployments?limit=500&resourceVersion=0: (355.406µs) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-informers 127.0.0.1:34388]
I1203 13:17:21.772920  106607 replica_set.go:180] Starting replicaset controller
I1203 13:17:21.772930  106607 shared_informer.go:197] Waiting for caches to sync for ReplicaSet
I1203 13:17:21.772952  106607 deployment_controller.go:152] Starting deployment controller
I1203 13:17:21.772956  106607 shared_informer.go:197] Waiting for caches to sync for deployment
I1203 13:17:21.773309  106607 deployment_controller.go:168] Adding deployment deployment
I1203 13:17:21.773513  106607 get.go:251] Starting watch for /apis/apps/v1/deployments, rv=22289 labels= fields= timeout=5m38s
I1203 13:17:21.773550  106607 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (293.936µs) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-informers 127.0.0.1:34388]
I1203 13:17:21.773712  106607 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (437.219µs) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-informers 127.0.0.1:34892]
I1203 13:17:21.774128  106607 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=21793 labels= fields= timeout=5m20s
I1203 13:17:21.774267  106607 get.go:251] Starting watch for /api/v1/pods, rv=21788 labels= fields= timeout=6m29s
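The pattern above is the reflector's list-then-watch handshake: an initial LIST with resourceVersion=0 (paged via limit=500) establishes a baseline, and the WATCH then resumes from the returned resourceVersion (rv=21788 for pods) so no events between the two calls are lost. A compact sketch under the context-free client-go signatures of this vintage; the real loop is k8s.io/client-go/tools/cache.Reflector:

```go
package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func listThenWatch(c kubernetes.Interface, ns string, stop <-chan struct{}) error {
	// Initial LIST: resourceVersion=0 allows a cached, possibly paged read.
	list, err := c.CoreV1().Pods(ns).List(metav1.ListOptions{Limit: 500, ResourceVersion: "0"})
	if err != nil {
		return err
	}
	// WATCH from the version the LIST returned, e.g. "21788" in the log.
	w, err := c.CoreV1().Pods(ns).Watch(metav1.ListOptions{ResourceVersion: list.ResourceVersion})
	if err != nil {
		return err
	}
	defer w.Stop()
	for {
		select {
		case ev, ok := <-w.ResultChan():
			if !ok {
				return nil // watch expired; a real reflector re-lists and retries
			}
			_ = ev // deliver Added/Modified/Deleted to the local store here
		case <-stop:
			return nil
		}
	}
}
```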
I1203 13:17:21.774767  106607 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (2.641476ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.779749  106607 httplog.go:90] GET /healthz: (1.113536ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.781143  106607 httplog.go:90] GET /api/v1/namespaces/default: (1.017318ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.783479  106607 httplog.go:90] POST /api/v1/namespaces: (1.855114ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.785794  106607 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.84898ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.791315  106607 httplog.go:90] POST /api/v1/namespaces/default/services: (5.068561ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.792801  106607 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.011136ms) 404 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.795127  106607 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (1.915884ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34102]
I1203 13:17:21.873183  106607 shared_informer.go:227] caches populated
I1203 13:17:21.873217  106607 shared_informer.go:204] Caches are synced for deployment 
I1203 13:17:21.873198  106607 shared_informer.go:227] caches populated
I1203 13:17:21.873257  106607 shared_informer.go:204] Caches are synced for ReplicaSet 
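"Waiting for caches to sync" followed by "Caches are synced" is the standard gate before a controller starts its workers: lister reads are only trustworthy once the informers have completed their initial LIST. A sketch of how that gate is typically wired with client-go (illustrative, not the deployment controller's exact code):

```go
package main

import (
	"k8s.io/client-go/informers"
	"k8s.io/client-go/tools/cache"
)

func run(factory informers.SharedInformerFactory, stop <-chan struct{}) {
	deployments := factory.Apps().V1().Deployments().Informer()
	replicasets := factory.Apps().V1().ReplicaSets().Informer()
	factory.Start(stop)
	// Block until both informers report HasSynced, or the stop channel closes.
	if !cache.WaitForCacheSync(stop, deployments.HasSynced, replicasets.HasSynced) {
		return
	}
	// Safe to start workers: lister state reflects at least the initial LIST.
}
```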
I1203 13:17:21.873289  106607 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-12-03 13:17:21.873281901 +0000 UTC m=+160.462342321)
I1203 13:17:21.873594  106607 deployment_util.go:259] Updating replica set "deployment-545c594b9d" revision to 1
I1203 13:17:21.878469  106607 httplog.go:90] POST /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets: (4.373802ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:34102]
I1203 13:17:21.878823  106607 replica_set.go:288] Adding ReplicaSet test-deployment-available-condition/deployment-545c594b9d
I1203 13:17:21.878878  106607 controller_utils.go:202] Controller test-deployment-available-condition/deployment-545c594b9d either never recorded expectations, or the ttl expired.
I1203 13:17:21.878900  106607 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (3.277343ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34922]
I1203 13:17:21.878901  106607 controller_utils.go:219] Setting expectations &controller.ControlleeExpectations{add:10, del:0, key:"test-deployment-available-condition/deployment-545c594b9d", timestamp:time.Time{wall:0xbf71b5f87462ede6, ext:160467959084, loc:(*time.Location)(0x7124bc0)}}
I1203 13:17:21.878956  106607 replica_set.go:561] Too few replicas for ReplicaSet test-deployment-available-condition/deployment-545c594b9d, need 10, creating 10
I1203 13:17:21.879110  106607 deployment_controller.go:214] ReplicaSet deployment-545c594b9d added.
I1203 13:17:21.879960  106607 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"test-deployment-available-condition", Name:"deployment", UID:"d55cfa44-a080-4a0b-8762-8e6b4c2aa113", APIVersion:"apps/v1", ResourceVersion:"22289", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set deployment-545c594b9d to 10
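The expectations lines show how the replicaset controller avoids over-creating while its own requests are in flight: it records add:10 before issuing any creates, decrements on every observed pod Add ("Lowered expectations" below), and will not re-evaluate the ReplicaSet until the count drains or the TTL expires. A minimal sketch of that bookkeeping with illustrative names (the real API lives in controller_utils.go):

```go
package main

import "sync"

// expectations tracks in-flight creates per ReplicaSet key
// ("namespace/name"), mirroring add:10 ... add:9 ... in the log.
type expectations struct {
	mu  sync.Mutex
	add map[string]int
}

func newExpectations() *expectations {
	return &expectations{add: map[string]int{}}
}

// Expect is called before the creates are issued.
func (e *expectations) Expect(key string, adds int) {
	e.mu.Lock()
	defer e.mu.Unlock()
	e.add[key] = adds
}

// ObserveAdd is called from the pod informer's Add handler.
func (e *expectations) ObserveAdd(key string) {
	e.mu.Lock()
	defer e.mu.Unlock()
	if e.add[key] > 0 {
		e.add[key]--
	}
}

// Fulfilled gates the sync loop: only act once in-flight creates landed.
func (e *expectations) Fulfilled(key string) bool {
	e.mu.Lock()
	defer e.mu.Unlock()
	return e.add[key] == 0
}
```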
I1203 13:17:21.881847  106607 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (2.094481ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:34922]
I1203 13:17:21.882077  106607 deployment_util.go:806] Deployment "deployment" timed out (false) [last progress check: 2019-12-03 13:17:21.879412613 +0000 UTC m=+160.468473036 - now: 2019-12-03 13:17:21.882070761 +0000 UTC m=+160.471131177]
I1203 13:17:21.882210  106607 deployment_controller.go:175] Updating deployment deployment
I1203 13:17:21.882688  106607 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/events: (2.410776ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:34924]
I1203 13:17:21.883205  106607 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/pods: (3.96181ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:34102]
I1203 13:17:21.883448  106607 controller_utils.go:592] Controller deployment-545c594b9d created pod deployment-545c594b9d-vvqfg
I1203 13:17:21.883523  106607 replica_set.go:378] Pod deployment-545c594b9d-vvqfg created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-545c594b9d-vvqfg", GenerateName:"deployment-545c594b9d-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-vvqfg", UID:"b0acafff-33c8-439f-92c0-6b34c04a5d53", ResourceVersion:"22307", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63710975841, loc:(*time.Location)(0x7124bc0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"545c594b9d"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-545c594b9d", UID:"a7e40980-fb57-4aea-ac4d-820c6b4b691f", Controller:(*bool)(0xc0211be507), BlockOwnerDeletion:(*bool)(0xc0211be508)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0211be590), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc021d071a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0211be598), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I1203 13:17:21.883765  106607 controller_utils.go:236] Lowered expectations &controller.ControlleeExpectations{add:9, del:0, key:"test-deployment-available-condition/deployment-545c594b9d", timestamp:time.Time{wall:0xbf71b5f87462ede6, ext:160467959084, loc:(*time.Location)(0x7124bc0)}}
I1203 13:17:21.883980  106607 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"test-deployment-available-condition", Name:"deployment-545c594b9d", UID:"a7e40980-fb57-4aea-ac4d-820c6b4b691f", APIVersion:"apps/v1", ResourceVersion:"22304", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-545c594b9d-vvqfg
I1203 13:17:21.885485  106607 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (3.148244ms) 409 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:34922]
I1203 13:17:21.886437  106607 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (13.1504ms)
I1203 13:17:21.886457  106607 deployment_controller.go:484] Error syncing deployment test-deployment-available-condition/deployment: Operation cannot be fulfilled on deployments.apps "deployment": the object has been modified; please apply your changes to the latest version and try again
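The 409 on the status PUT above is optimistic concurrency at work: the write carried a resourceVersion that had already been superseded, so the apiserver rejected it and the controller simply requeues, as the next sync shows. When a caller wants to absorb the conflict instead, the conventional pattern is re-read and retry via k8s.io/client-go/util/retry. A sketch, assuming a configured clientset and the context-free signatures of this era; newConditions is a placeholder:

```go
package main

import (
	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

func updateStatusWithRetry(c kubernetes.Interface, ns, name string, newConditions []appsv1.DeploymentCondition) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-read on every attempt so the PUT carries a fresh resourceVersion.
		d, err := c.AppsV1().Deployments(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		d.Status.Conditions = newConditions
		_, err = c.AppsV1().Deployments(ns).UpdateStatus(d)
		return err // a Conflict error here triggers another attempt
	})
}
```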
I1203 13:17:21.886494  106607 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-12-03 13:17:21.886489304 +0000 UTC m=+160.475549714)
I1203 13:17:21.886881  106607 deployment_util.go:806] Deployment "deployment" timed out (false) [last progress check: 2019-12-03 13:17:21 +0000 UTC - now: 2019-12-03 13:17:21.886876065 +0000 UTC m=+160.475936475]
I1203 13:17:21.888591  106607 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/events: (4.696606ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:34102]
I1203 13:17:21.888625  106607 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/pods: (4.258221ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:34926]
I1203 13:17:21.888959  106607 controller_utils.go:592] Controller deployment-545c594b9d created pod deployment-545c594b9d-d24wz
I1203 13:17:21.888625  106607 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/pods: (4.778684ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:34924]
I1203 13:17:21.889024  106607 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"test-deployment-available-condition", Name:"deployment-545c594b9d", UID:"a7e40980-fb57-4aea-ac4d-820c6b4b691f", APIVersion:"apps/v1", ResourceVersion:"22304", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-545c594b9d-d24wz
I1203 13:17:21.889220  106607 controller_utils.go:592] Controller deployment-545c594b9d created pod deployment-545c594b9d-b8rc9
I1203 13:17:21.889517  106607 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"test-deployment-available-condition", Name:"deployment-545c594b9d", UID:"a7e40980-fb57-4aea-ac4d-820c6b4b691f", APIVersion:"apps/v1", ResourceVersion:"22304", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-545c594b9d-b8rc9
I1203 13:17:21.889554  106607 replica_set.go:378] Pod deployment-545c594b9d-d24wz created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-545c594b9d-d24wz", GenerateName:"deployment-545c594b9d-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-d24wz", UID:"d378fcbb-6017-44d6-8fcc-faefd7984053", ResourceVersion:"22310", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63710975841, loc:(*time.Location)(0x7124bc0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"545c594b9d"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-545c594b9d", UID:"a7e40980-fb57-4aea-ac4d-820c6b4b691f", Controller:(*bool)(0xc02135e1e7), BlockOwnerDeletion:(*bool)(0xc02135e1e8)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc02135e270), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc021ce5800), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc02135e278), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I1203 13:17:21.889964  106607 controller_utils.go:236] Lowered expectations &controller.ControlleeExpectations{add:8, del:0, key:"test-deployment-available-condition/deployment-545c594b9d", timestamp:time.Time{wall:0xbf71b5f87462ede6, ext:160467959084, loc:(*time.Location)(0x7124bc0)}}
I1203 13:17:21.890001  106607 replica_set.go:378] Pod deployment-545c594b9d-b8rc9 created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-545c594b9d-b8rc9", GenerateName:"deployment-545c594b9d-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-b8rc9", UID:"5bfd0b8e-a243-4d3e-82cf-82b35af35a87", ResourceVersion:"22311", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63710975841, loc:(*time.Location)(0x7124bc0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"545c594b9d"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-545c594b9d", UID:"a7e40980-fb57-4aea-ac4d-820c6b4b691f", Controller:(*bool)(0xc02135e4d7), BlockOwnerDeletion:(*bool)(0xc02135e4d8)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc02135e560), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc021ce5860), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc02135e568), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I1203 13:17:21.890170  106607 controller_utils.go:236] Lowered expectations &controller.ControlleeExpectations{add:7, del:0, key:"test-deployment-available-condition/deployment-545c594b9d", timestamp:time.Time{wall:0xbf71b5f87462ede6, ext:160467959084, loc:(*time.Location)(0x7124bc0)}}
I1203 13:17:21.891412  106607 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (4.188143ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:34922]
I1203 13:17:21.891738  106607 deployment_controller.go:175] Updating deployment deployment
I1203 13:17:21.891767  106607 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (5.272771ms)
I1203 13:17:21.891802  106607 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-12-03 13:17:21.891798458 +0000 UTC m=+160.480858858)
I1203 13:17:21.892105  106607 deployment_util.go:806] Deployment "deployment" timed out (false) [last progress check: 2019-12-03 13:17:21 +0000 UTC - now: 2019-12-03 13:17:21.892099106 +0000 UTC m=+160.481159527]
I1203 13:17:21.892148  106607 progress.go:193] Queueing up deployment "deployment" for a progress check after 7199s
I1203 13:17:21.892171  106607 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (370.219µs)
I1203 13:17:21.892807  106607 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/events: (3.455448ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:34926]
I1203 13:17:21.893252  106607 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/pods: (3.656434ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:34102]
I1203 13:17:21.893303  106607 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/pods: (3.323619ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:34930]
I1203 13:17:21.893319  106607 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/pods: (3.595198ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:34924]
I1203 13:17:21.893549  106607 controller_utils.go:592] Controller deployment-545c594b9d created pod deployment-545c594b9d-2nr84
I1203 13:17:21.893579  106607 controller_utils.go:592] Controller deployment-545c594b9d created pod deployment-545c594b9d-h5vkd
I1203 13:17:21.893587  106607 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"test-deployment-available-condition", Name:"deployment-545c594b9d", UID:"a7e40980-fb57-4aea-ac4d-820c6b4b691f", APIVersion:"apps/v1", ResourceVersion:"22304", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-545c594b9d-2nr84
I1203 13:17:21.893614  106607 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"test-deployment-available-condition", Name:"deployment-545c594b9d", UID:"a7e40980-fb57-4aea-ac4d-820c6b4b691f", APIVersion:"apps/v1", ResourceVersion:"22304", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-545c594b9d-h5vkd
I1203 13:17:21.893811  106607 controller_utils.go:592] Controller deployment-545c594b9d created pod deployment-545c594b9d-4bp5s
I1203 13:17:21.893848  106607 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"test-deployment-available-condition", Name:"deployment-545c594b9d", UID:"a7e40980-fb57-4aea-ac4d-820c6b4b691f", APIVersion:"apps/v1", ResourceVersion:"22304", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-545c594b9d-4bp5s
I1203 13:17:21.893817  106607 cacher.go:782] cacher (*core.Pod): 1 objects queued in incoming channel.
I1203 13:17:21.894227  106607 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/pods: (4.248427ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:34932]
I1203 13:17:21.894508  106607 replica_set.go:378] Pod deployment-545c594b9d-tj498 created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-545c594b9d-tj498", GenerateName:"deployment-545c594b9d-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-tj498", UID:"ba1e698d-453b-4a83-9474-c838ed9f7ed2", ResourceVersion:"22314", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63710975841, loc:(*time.Location)(0x7124bc0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"545c594b9d"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-545c594b9d", UID:"a7e40980-fb57-4aea-ac4d-820c6b4b691f", Controller:(*bool)(0xc021404077), BlockOwnerDeletion:(*bool)(0xc021404078)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc021404100), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc021d5bbc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc021404108), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I1203 13:17:21.894675  106607 controller_utils.go:236] Lowered expectations &controller.ControlleeExpectations{add:6, del:0, key:"test-deployment-available-condition/deployment-545c594b9d", timestamp:time.Time{wall:0xbf71b5f87462ede6, ext:160467959084, loc:(*time.Location)(0x7124bc0)}}
I1203 13:17:21.894746  106607 replica_set.go:378] Pod deployment-545c594b9d-2nr84 created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-545c594b9d-2nr84", GenerateName:"deployment-545c594b9d-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-2nr84", UID:"6cda6f06-5a24-47e9-9ee1-04bbc66fc2a5", ResourceVersion:"22315", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63710975841, loc:(*time.Location)(0x7124bc0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"545c594b9d"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-545c594b9d", UID:"a7e40980-fb57-4aea-ac4d-820c6b4b691f", Controller:(*bool)(0xc021404367), BlockOwnerDeletion:(*bool)(0xc021404368)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0214043f0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc021d5bc20), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0214043f8), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I1203 13:17:21.894866  106607 controller_utils.go:236] Lowered expectations &controller.ControlleeExpectations{add:5, del:0, key:"test-deployment-available-condition/deployment-545c594b9d", timestamp:time.Time{wall:0xbf71b5f87462ede6, ext:160467959084, loc:(*time.Location)(0x7124bc0)}}
I1203 13:17:21.894918  106607 replica_set.go:378] Pod deployment-545c594b9d-h5vkd created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-545c594b9d-h5vkd", GenerateName:"deployment-545c594b9d-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-h5vkd", UID:"7988884d-adcb-4e22-b49d-f433e6f7c126", ResourceVersion:"22316", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63710975841, loc:(*time.Location)(0x7124bc0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"545c594b9d"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-545c594b9d", UID:"a7e40980-fb57-4aea-ac4d-820c6b4b691f", Controller:(*bool)(0xc021404657), BlockOwnerDeletion:(*bool)(0xc021404658)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0214046e0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc021d5bc80), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0214046e8), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I1203 13:17:21.895062  106607 controller_utils.go:236] Lowered expectations &controller.ControlleeExpectations{add:4, del:0, key:"test-deployment-available-condition/deployment-545c594b9d", timestamp:time.Time{wall:0xbf71b5f87462ede6, ext:160467959084, loc:(*time.Location)(0x7124bc0)}}
I1203 13:17:21.895078  106607 replica_set.go:378] Pod deployment-545c594b9d-4bp5s created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-545c594b9d-4bp5s", GenerateName:"deployment-545c594b9d-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-4bp5s", UID:"19107ccc-91cd-4f2e-af3c-6df1156af5c5", ResourceVersion:"22317", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63710975841, loc:(*time.Location)(0x7124bc0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"545c594b9d"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-545c594b9d", UID:"a7e40980-fb57-4aea-ac4d-820c6b4b691f", Controller:(*bool)(0xc021404947), BlockOwnerDeletion:(*bool)(0xc021404948)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0214049d0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc021d5bce0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0214049d8), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I1203 13:17:21.895173  106607 controller_utils.go:236] Lowered expectations &controller.ControlleeExpectations{add:3, del:0, key:"test-deployment-available-condition/deployment-545c594b9d", timestamp:time.Time{wall:0xbf71b5f87462ede6, ext:160467959084, loc:(*time.Location)(0x7124bc0)}}
I1203 13:17:21.895896  106607 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/events: (2.636942ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:34926]
I1203 13:17:21.896313  106607 controller_utils.go:592] Controller deployment-545c594b9d created pod deployment-545c594b9d-tj498
I1203 13:17:21.896460  106607 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"test-deployment-available-condition", Name:"deployment-545c594b9d", UID:"a7e40980-fb57-4aea-ac4d-820c6b4b691f", APIVersion:"apps/v1", ResourceVersion:"22304", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-545c594b9d-tj498
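Note the create cadence above: one pod first, then two, then four, with the remainder arriving later. That is the replicaset controller's slow-start batching, which doubles the batch size only while creates keep succeeding, so a systemic failure (quota, admission) wastes few requests. A sequential sketch of the shape; the real helper issues each batch concurrently:

```go
package main

// slowStartBatch calls create until total succeeds, in doubling batches
// (1, 2, 4, ...), stopping at the first failure.
func slowStartBatch(total int, create func() error) (int, error) {
	succeeded := 0
	for batch := 1; succeeded < total; batch *= 2 {
		if remaining := total - succeeded; batch > remaining {
			batch = remaining
		}
		for i := 0; i < batch; i++ {
			if err := create(); err != nil {
				return succeeded, err // don't double into a failing apiserver
			}
			succeeded++
		}
	}
	return succeeded, nil
}
```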
I1203 13:17:21.977412  106607 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (1.835808ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34924]
I1203 13:17:22.078306  106607 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (2.352339ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34924]
I1203 13:17:22.079124  106607 request.go:565] Throttling request took 182.866378ms, request: POST:http://127.0.0.1:42185/api/v1/namespaces/test-deployment-available-condition/events
I1203 13:17:22.088534  106607 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/events: (8.893215ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:34922]
I1203 13:17:22.177316  106607 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (1.790105ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34922]
I1203 13:17:22.279261  106607 request.go:565] Throttling request took 382.781891ms, request: POST:http://127.0.0.1:42185/api/v1/namespaces/test-deployment-available-condition/pods
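
The request.go "Throttling request took ..." lines are client-side rate limiting, not server pushback: the client's token-bucket limiter delays each request before it is sent, so a burst of ten pod creations queues up with steadily growing waits (182ms, 382ms, 582ms, 782ms above and below). A minimal token-bucket sketch using golang.org/x/time/rate; the 5 QPS / burst 2 values are illustrative, chosen only to make the delay visible:

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// 5 requests/second with a small burst, roughly the shape of the
	// client-side limiter that prints "Throttling request took ...".
	limiter := rate.NewLimiter(rate.Limit(5), 2)
	ctx := context.Background()
	for i := 0; i < 6; i++ {
		start := time.Now()
		if err := limiter.Wait(ctx); err != nil {
			panic(err)
		}
		// After the burst is spent, each request waits ~200ms for a token.
		if d := time.Since(start); d > 100*time.Millisecond {
			fmt.Printf("throttled request %d for %v\n", i, d)
		}
	}
}
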
I1203 13:17:22.280788  106607 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (5.354333ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34922]
I1203 13:17:22.282091  106607 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/pods: (2.525027ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:34924]
I1203 13:17:22.282407  106607 controller_utils.go:592] Controller deployment-545c594b9d created pod deployment-545c594b9d-b7fqf
I1203 13:17:22.282456  106607 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"test-deployment-available-condition", Name:"deployment-545c594b9d", UID:"a7e40980-fb57-4aea-ac4d-820c6b4b691f", APIVersion:"apps/v1", ResourceVersion:"22304", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-545c594b9d-b7fqf
I1203 13:17:22.282289  106607 replica_set.go:378] Pod deployment-545c594b9d-b7fqf created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-545c594b9d-b7fqf", GenerateName:"deployment-545c594b9d-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-b7fqf", UID:"94cebaa6-ba30-4cb6-9301-23297fffe128", ResourceVersion:"22347", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63710975842, loc:(*time.Location)(0x7124bc0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"545c594b9d"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-545c594b9d", UID:"a7e40980-fb57-4aea-ac4d-820c6b4b691f", Controller:(*bool)(0xc0215a24a7), BlockOwnerDeletion:(*bool)(0xc0215a24a8)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0215a2530), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc021f26480), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0215a2538), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I1203 13:17:22.282487  106607 controller_utils.go:236] Lowered expectations &controller.ControlleeExpectations{add:2, del:0, key:"test-deployment-available-condition/deployment-545c594b9d", timestamp:time.Time{wall:0xbf71b5f87462ede6, ext:160467959084, loc:(*time.Location)(0x7124bc0)}}
I1203 13:17:22.377095  106607 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (1.605534ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34924]
I1203 13:17:22.478542  106607 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (3.073419ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34924]
I1203 13:17:22.479291  106607 request.go:565] Throttling request took 582.799034ms, request: POST:http://127.0.0.1:42185/api/v1/namespaces/test-deployment-available-condition/pods
I1203 13:17:22.481803  106607 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/pods: (2.255095ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:34924]
I1203 13:17:22.482047  106607 controller_utils.go:592] Controller deployment-545c594b9d created pod deployment-545c594b9d-c774n
I1203 13:17:22.482094  106607 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"test-deployment-available-condition", Name:"deployment-545c594b9d", UID:"a7e40980-fb57-4aea-ac4d-820c6b4b691f", APIVersion:"apps/v1", ResourceVersion:"22304", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-545c594b9d-c774n
I1203 13:17:22.482110  106607 replica_set.go:378] Pod deployment-545c594b9d-c774n created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-545c594b9d-c774n", GenerateName:"deployment-545c594b9d-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-c774n", UID:"dd7b6699-301b-4ff4-95a8-38f159b1f59b", ResourceVersion:"22366", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63710975842, loc:(*time.Location)(0x7124bc0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"545c594b9d"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-545c594b9d", UID:"a7e40980-fb57-4aea-ac4d-820c6b4b691f", Controller:(*bool)(0xc0215a2d27), BlockOwnerDeletion:(*bool)(0xc0215a2d28)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0215a2db0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc021f267e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0215a2db8), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I1203 13:17:22.482249  106607 controller_utils.go:236] Lowered expectations &controller.ControlleeExpectations{add:1, del:0, key:"test-deployment-available-condition/deployment-545c594b9d", timestamp:time.Time{wall:0xbf71b5f87462ede6, ext:160467959084, loc:(*time.Location)(0x7124bc0)}}
I1203 13:17:22.577288  106607 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (1.729668ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34924]
I1203 13:17:22.678750  106607 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (3.239238ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34924]
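
The evenly spaced GETs of .../deployments/deployment roughly every 100ms are the test itself polling the deployment's status between controller writes. A sketch of that polling pattern with wait.Poll, assuming the context-free client-go signatures of this tree and an illustrative condition check:

package sketch

import (
	"time"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForCondition re-reads the deployment every 100ms until the named
// condition reaches the wanted status, mirroring the repeated GETs above.
func waitForCondition(c kubernetes.Interface, ns, name string, condType appsv1.DeploymentConditionType, want bool) error {
	return wait.Poll(100*time.Millisecond, 60*time.Second, func() (bool, error) {
		d, err := c.AppsV1().Deployments(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range d.Status.Conditions {
			if cond.Type == condType {
				return (cond.Status == "True") == want, nil
			}
		}
		return false, nil
	})
}
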
I1203 13:17:22.679189  106607 request.go:565] Throttling request took 782.596819ms, request: POST:http://127.0.0.1:42185/api/v1/namespaces/test-deployment-available-condition/pods
I1203 13:17:22.681889  106607 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/pods: (2.454354ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:34924]
I1203 13:17:22.682138  106607 controller_utils.go:592] Controller deployment-545c594b9d created pod deployment-545c594b9d-vrtt2
I1203 13:17:22.682207  106607 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-545c594b9d, replicas 0->0 (need 10), fullyLabeledReplicas 0->0, readyReplicas 0->0, availableReplicas 0->0, sequence No: 0->1
I1203 13:17:22.682229  106607 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"test-deployment-available-condition", Name:"deployment-545c594b9d", UID:"a7e40980-fb57-4aea-ac4d-820c6b4b691f", APIVersion:"apps/v1", ResourceVersion:"22304", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: deployment-545c594b9d-vrtt2
I1203 13:17:22.682366  106607 replica_set.go:378] Pod deployment-545c594b9d-vrtt2 created: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"deployment-545c594b9d-vrtt2", GenerateName:"deployment-545c594b9d-", Namespace:"test-deployment-available-condition", SelfLink:"/api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-vrtt2", UID:"53d6bf68-0313-4829-8f0c-e45bb6e1ac56", ResourceVersion:"22372", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63710975842, loc:(*time.Location)(0x7124bc0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"test", "pod-template-hash":"545c594b9d"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"apps/v1", Kind:"ReplicaSet", Name:"deployment-545c594b9d", UID:"a7e40980-fb57-4aea-ac4d-820c6b4b691f", Controller:(*bool)(0xc021648427), BlockOwnerDeletion:(*bool)(0xc021648428)}}, Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"fake-name", Image:"fakeimage", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"Always", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0216484b0), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc02201c120), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0216484b8), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:(*v1.Time)(nil), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}.
I1203 13:17:22.682458  106607 controller_utils.go:236] Lowered expectations &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-545c594b9d", timestamp:time.Time{wall:0xbf71b5f87462ede6, ext:160467959084, loc:(*time.Location)(0x7124bc0)}}
I1203 13:17:22.685569  106607 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-545c594b9d/status: (3.025952ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:34924]
I1203 13:17:22.685924  106607 replica_set.go:659] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-545c594b9d" (807.04798ms)
I1203 13:17:22.685985  106607 deployment_controller.go:280] ReplicaSet deployment-545c594b9d updated.
I1203 13:17:22.686025  106607 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-12-03 13:17:22.686007017 +0000 UTC m=+161.275067430)
I1203 13:17:22.686003  106607 controller_utils.go:185] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-545c594b9d", timestamp:time.Time{wall:0xbf71b5f87462ede6, ext:160467959084, loc:(*time.Location)(0x7124bc0)}}
I1203 13:17:22.686132  106607 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-545c594b9d, replicas 0->10 (need 10), fullyLabeledReplicas 0->10, readyReplicas 0->0, availableReplicas 0->0, sequence No: 1->1
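
Each "Updating status for : ... replicas a->b (need 10), fullyLabeledReplicas ..., readyReplicas ..., availableReplicas ..." line is the ReplicaSet controller recomputing its status counters from the pods it owns before PUTting the status subresource. A simplified sketch of the ready/available distinction (not the actual pkg/controller/replicaset code):

package sketch

import (
	"time"

	v1 "k8s.io/api/core/v1"
)

// countStatus derives the counters printed as "replicas a->b ...,
// readyReplicas ..., availableReplicas ..." from the owned pods.
func countStatus(pods []*v1.Pod, minReadySeconds int32, now time.Time) (replicas, ready, available int32) {
	for _, p := range pods {
		replicas++
		var readyCond *v1.PodCondition
		for i := range p.Status.Conditions {
			if p.Status.Conditions[i].Type == v1.PodReady {
				readyCond = &p.Status.Conditions[i]
			}
		}
		if readyCond == nil || readyCond.Status != v1.ConditionTrue {
			continue
		}
		ready++
		// A ready pod counts as available only after staying ready for
		// minReadySeconds; with the 3600s this test appears to use, the
		// log's availableReplicas stays 0 while readyReplicas climbs to 10.
		if minReadySeconds == 0 ||
			now.Sub(readyCond.LastTransitionTime.Time) >= time.Duration(minReadySeconds)*time.Second {
			available++
		}
	}
	return
}
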
I1203 13:17:22.686418  106607 deployment_util.go:806] Deployment "deployment" timed out (false) [last progress check: 2019-12-03 13:17:21 +0000 UTC - now: 2019-12-03 13:17:22.686412425 +0000 UTC m=+161.275472831]
I1203 13:17:22.686474  106607 progress.go:193] Queueing up deployment "deployment" for a progress check after 7198s
I1203 13:17:22.686489  106607 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (480.571µs)
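
The deployment_util.go "timed out (false)" plus "Queueing up deployment ... for a progress check after 7198s" pair is the progressDeadlineSeconds bookkeeping: the controller measures how much of the deadline remains since the last progress transition and requeues the deployment for exactly that long. The arithmetic, assuming the test sets progressDeadlineSeconds to 7200 (consistent with the 7198s/7199s values in the log):

package main

import (
	"fmt"
	"time"
)

// timeUntilProgressCheck mirrors the requeue computation behind
// "Queueing up deployment ... for a progress check after 7198s":
// from the last progress transition the deployment may still make
// progress for progressDeadlineSeconds; check again when that expires.
func timeUntilProgressCheck(lastProgress time.Time, progressDeadlineSeconds int32, now time.Time) time.Duration {
	deadline := lastProgress.Add(time.Duration(progressDeadlineSeconds) * time.Second)
	return deadline.Sub(now)
}

func main() {
	last := time.Now().Add(-2 * time.Second) // last progress check ~2s ago
	fmt.Printf("re-check in %v\n", timeUntilProgressCheck(last, 7200, time.Now()).Round(time.Second)) // ~7198s
}
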
I1203 13:17:22.688946  106607 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-545c594b9d/status: (2.533717ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:34924]
I1203 13:17:22.689031  106607 deployment_controller.go:280] ReplicaSet deployment-545c594b9d updated.
I1203 13:17:22.689084  106607 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-12-03 13:17:22.68905991 +0000 UTC m=+161.278120333)
I1203 13:17:22.689171  106607 replica_set.go:659] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-545c594b9d" (3.170798ms)
I1203 13:17:22.689240  106607 controller_utils.go:185] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-545c594b9d", timestamp:time.Time{wall:0xbf71b5f87462ede6, ext:160467959084, loc:(*time.Location)(0x7124bc0)}}
I1203 13:17:22.689319  106607 replica_set.go:659] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-545c594b9d" (84.85µs)
I1203 13:17:22.695218  106607 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (5.326773ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:34924]
I1203 13:17:22.695352  106607 deployment_controller.go:175] Updating deployment deployment
I1203 13:17:22.695619  106607 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (6.554608ms)
I1203 13:17:22.695697  106607 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-12-03 13:17:22.69569202 +0000 UTC m=+161.284752421)
I1203 13:17:22.696039  106607 deployment_util.go:806] Deployment "deployment" timed out (false) [last progress check: 2019-12-03 13:17:22 +0000 UTC - now: 2019-12-03 13:17:22.696033089 +0000 UTC m=+161.285093506]
I1203 13:17:22.696078  106607 progress.go:193] Queueing up deployment "deployment" for a progress check after 7199s
I1203 13:17:22.696103  106607 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (408.954µs)
I1203 13:17:22.777928  106607 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (2.551221ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34924]
I1203 13:17:22.779801  106607 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (1.246644ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34924]
I1203 13:17:22.781314  106607 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (1.092277ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34924]
I1203 13:17:22.784144  106607 httplog.go:90] GET /api/v1/namespaces/test-deployment-available-condition/pods?labelSelector=name%3Dtest: (2.351871ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34924]
I1203 13:17:22.787104  106607 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (2.144903ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34924]
I1203 13:17:22.879231  106607 request.go:565] Throttling request took 790.28878ms, request: POST:http://127.0.0.1:42185/api/v1/namespaces/test-deployment-available-condition/events
I1203 13:17:22.883575  106607 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/events: (4.037082ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:34924]
I1203 13:17:22.969369  106607 request.go:565] Throttling request took 181.760764ms, request: GET:http://127.0.0.1:42185/apis/apps/v1/namespaces/test-deployment-available-condition/replicasets?labelSelector=name%3Dtest
I1203 13:17:22.972173  106607 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets?labelSelector=name%3Dtest: (2.54619ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34924]
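
The GET .../replicasets?labelSelector=name%3Dtest above is a list filtered by the deployment's label selector, URL-encoded as name%3Dtest. Building the same request with client-go might look like this (context-free List signature assumed):

package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/kubernetes"
)

// listReplicaSetsForSelector issues the same request the log shows:
// GET .../replicasets?labelSelector=name%3Dtest.
func listReplicaSetsForSelector(c kubernetes.Interface, ns string) error {
	sel := labels.Set{"name": "test"}.AsSelector().String() // "name=test"
	_, err := c.AppsV1().ReplicaSets(ns).List(metav1.ListOptions{LabelSelector: sel})
	return err
}
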
I1203 13:17:22.974829  106607 httplog.go:90] PUT /api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-2nr84/status: (1.965483ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34924]
I1203 13:17:22.975314  106607 replica_set.go:441] Pod deployment-545c594b9d-2nr84 updated, objectMeta {Name:deployment-545c594b9d-2nr84 GenerateName:deployment-545c594b9d- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-2nr84 UID:6cda6f06-5a24-47e9-9ee1-04bbc66fc2a5 ResourceVersion:22315 Generation:0 CreationTimestamp:2019-12-03 13:17:21 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:545c594b9d] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-545c594b9d UID:a7e40980-fb57-4aea-ac4d-820c6b4b691f Controller:0xc021404367 BlockOwnerDeletion:0xc021404368}] Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-545c594b9d-2nr84 GenerateName:deployment-545c594b9d- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-2nr84 UID:6cda6f06-5a24-47e9-9ee1-04bbc66fc2a5 ResourceVersion:22434 Generation:0 CreationTimestamp:2019-12-03 13:17:21 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:545c594b9d] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-545c594b9d UID:a7e40980-fb57-4aea-ac4d-820c6b4b691f Controller:0xc0218b61b7 BlockOwnerDeletion:0xc0218b61b8}] Finalizers:[] ClusterName: ManagedFields:[]}.
I1203 13:17:22.975418  106607 replica_set.go:451] ReplicaSet "deployment-545c594b9d" will be enqueued after 3600s for availability check
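
"will be enqueued after 3600s for availability check" appears because the test runs with a very large minReadySeconds (3600): a pod that just turned Ready will only count as available an hour later, so the controller schedules a delayed re-sync for that moment rather than waiting for another pod event. A sketch of that delayed enqueue with client-go's delaying workqueue, with the delay shortened so the example runs quickly:

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	q := workqueue.NewDelayingQueue()
	defer q.ShutDown()

	// The controller re-enqueues the ReplicaSet key for the time at which
	// minReadySeconds will have elapsed; 3600s in this test, shortened
	// here to keep the example runnable.
	key := "test-deployment-available-condition/deployment-545c594b9d"
	q.AddAfter(key, 50*time.Millisecond)

	item, _ := q.Get() // blocks until the delay elapses
	fmt.Println("re-syncing", item)
	q.Done(item)
}
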
I1203 13:17:22.975493  106607 controller_utils.go:185] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-545c594b9d", timestamp:time.Time{wall:0xbf71b5f87462ede6, ext:160467959084, loc:(*time.Location)(0x7124bc0)}}
I1203 13:17:22.975627  106607 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-545c594b9d, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 0->1, availableReplicas 0->0, sequence No: 1->1
I1203 13:17:22.977924  106607 httplog.go:90] PUT /api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-4bp5s/status: (2.637421ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34924]
I1203 13:17:22.978021  106607 replica_set.go:441] Pod deployment-545c594b9d-4bp5s updated, objectMeta {Name:deployment-545c594b9d-4bp5s GenerateName:deployment-545c594b9d- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-4bp5s UID:19107ccc-91cd-4f2e-af3c-6df1156af5c5 ResourceVersion:22317 Generation:0 CreationTimestamp:2019-12-03 13:17:21 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:545c594b9d] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-545c594b9d UID:a7e40980-fb57-4aea-ac4d-820c6b4b691f Controller:0xc021404947 BlockOwnerDeletion:0xc021404948}] Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-545c594b9d-4bp5s GenerateName:deployment-545c594b9d- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-4bp5s UID:19107ccc-91cd-4f2e-af3c-6df1156af5c5 ResourceVersion:22435 Generation:0 CreationTimestamp:2019-12-03 13:17:21 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:545c594b9d] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-545c594b9d UID:a7e40980-fb57-4aea-ac4d-820c6b4b691f Controller:0xc021920717 BlockOwnerDeletion:0xc021920718}] Finalizers:[] ClusterName: ManagedFields:[]}.
I1203 13:17:22.978127  106607 replica_set.go:451] ReplicaSet "deployment-545c594b9d" will be enqueued after 3600s for availability check
I1203 13:17:22.980741  106607 deployment_controller.go:280] ReplicaSet deployment-545c594b9d updated.
I1203 13:17:22.980777  106607 httplog.go:90] PUT /api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-b7fqf/status: (2.456376ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34924]
I1203 13:17:22.980794  106607 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-12-03 13:17:22.980771565 +0000 UTC m=+161.569831985)
I1203 13:17:22.981125  106607 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-545c594b9d/status: (5.250678ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:34922]
I1203 13:17:22.981367  106607 replica_set.go:659] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-545c594b9d" (5.878459ms)
I1203 13:17:22.981398  106607 controller_utils.go:185] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-545c594b9d", timestamp:time.Time{wall:0xbf71b5f87462ede6, ext:160467959084, loc:(*time.Location)(0x7124bc0)}}
I1203 13:17:22.981393  106607 replica_set.go:441] Pod deployment-545c594b9d-b7fqf updated, objectMeta {Name:deployment-545c594b9d-b7fqf GenerateName:deployment-545c594b9d- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-b7fqf UID:94cebaa6-ba30-4cb6-9301-23297fffe128 ResourceVersion:22347 Generation:0 CreationTimestamp:2019-12-03 13:17:22 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:545c594b9d] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-545c594b9d UID:a7e40980-fb57-4aea-ac4d-820c6b4b691f Controller:0xc0215a24a7 BlockOwnerDeletion:0xc0215a24a8}] Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-545c594b9d-b7fqf GenerateName:deployment-545c594b9d- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-b7fqf UID:94cebaa6-ba30-4cb6-9301-23297fffe128 ResourceVersion:22438 Generation:0 CreationTimestamp:2019-12-03 13:17:22 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:545c594b9d] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-545c594b9d UID:a7e40980-fb57-4aea-ac4d-820c6b4b691f Controller:0xc0217c8727 BlockOwnerDeletion:0xc0217c8728}] Finalizers:[] ClusterName: ManagedFields:[]}.
I1203 13:17:22.981480  106607 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-545c594b9d, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 1->3, availableReplicas 0->0, sequence No: 1->1
I1203 13:17:22.981491  106607 replica_set.go:451] ReplicaSet "deployment-545c594b9d" will be enqueued after 3600s for availability check
I1203 13:17:22.983983  106607 httplog.go:90] PUT /api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-b8rc9/status: (2.4136ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34924]
I1203 13:17:22.983934  106607 replica_set.go:441] Pod deployment-545c594b9d-b8rc9 updated, objectMeta {Name:deployment-545c594b9d-b8rc9 GenerateName:deployment-545c594b9d- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-b8rc9 UID:5bfd0b8e-a243-4d3e-82cf-82b35af35a87 ResourceVersion:22311 Generation:0 CreationTimestamp:2019-12-03 13:17:21 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:545c594b9d] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-545c594b9d UID:a7e40980-fb57-4aea-ac4d-820c6b4b691f Controller:0xc02135e4d7 BlockOwnerDeletion:0xc02135e4d8}] Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-545c594b9d-b8rc9 GenerateName:deployment-545c594b9d- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-b8rc9 UID:5bfd0b8e-a243-4d3e-82cf-82b35af35a87 ResourceVersion:22439 Generation:0 CreationTimestamp:2019-12-03 13:17:21 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:545c594b9d] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-545c594b9d UID:a7e40980-fb57-4aea-ac4d-820c6b4b691f Controller:0xc0219964c7 BlockOwnerDeletion:0xc0219964c8}] Finalizers:[] ClusterName: ManagedFields:[]}.
I1203 13:17:22.984025  106607 replica_set.go:451] ReplicaSet "deployment-545c594b9d" will be enqueued after 3600s for availability check
I1203 13:17:22.984396  106607 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-545c594b9d/status: (2.461484ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:35224]
I1203 13:17:22.984532  106607 deployment_controller.go:280] ReplicaSet deployment-545c594b9d updated.
I1203 13:17:22.984626  106607 replica_set.go:659] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-545c594b9d" (3.228309ms)
I1203 13:17:22.984678  106607 controller_utils.go:185] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-545c594b9d", timestamp:time.Time{wall:0xbf71b5f87462ede6, ext:160467959084, loc:(*time.Location)(0x7124bc0)}}
I1203 13:17:22.984764  106607 deployment_controller.go:175] Updating deployment deployment
I1203 13:17:22.984771  106607 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-545c594b9d, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 3->4, availableReplicas 0->0, sequence No: 1->1
I1203 13:17:22.987193  106607 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (5.636009ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:34922]
I1203 13:17:22.987814  106607 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (7.038151ms)
I1203 13:17:22.987857  106607 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-12-03 13:17:22.987851702 +0000 UTC m=+161.576912119)
I1203 13:17:22.989465  106607 deployment_controller.go:280] ReplicaSet deployment-545c594b9d updated.
I1203 13:17:22.989405  106607 replica_set.go:441] Pod deployment-545c594b9d-c774n updated, objectMeta {Name:deployment-545c594b9d-c774n GenerateName:deployment-545c594b9d- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-c774n UID:dd7b6699-301b-4ff4-95a8-38f159b1f59b ResourceVersion:22366 Generation:0 CreationTimestamp:2019-12-03 13:17:22 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:545c594b9d] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-545c594b9d UID:a7e40980-fb57-4aea-ac4d-820c6b4b691f Controller:0xc0215a2d27 BlockOwnerDeletion:0xc0215a2d28}] Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-545c594b9d-c774n GenerateName:deployment-545c594b9d- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-c774n UID:dd7b6699-301b-4ff4-95a8-38f159b1f59b ResourceVersion:22443 Generation:0 CreationTimestamp:2019-12-03 13:17:22 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:545c594b9d] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-545c594b9d UID:a7e40980-fb57-4aea-ac4d-820c6b4b691f Controller:0xc0214052d7 BlockOwnerDeletion:0xc0214052d8}] Finalizers:[] ClusterName: ManagedFields:[]}.
I1203 13:17:22.989501  106607 replica_set.go:451] ReplicaSet "deployment-545c594b9d" will be enqueued after 3600s for availability check
I1203 13:17:22.990580  106607 deployment_controller.go:175] Updating deployment deployment
I1203 13:17:22.990583  106607 httplog.go:90] PUT /api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-c774n/status: (6.199251ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34924]
I1203 13:17:22.990583  106607 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-545c594b9d/status: (5.343578ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:35224]
I1203 13:17:22.990881  106607 replica_set.go:659] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-545c594b9d" (6.205685ms)
I1203 13:17:22.990949  106607 controller_utils.go:185] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-545c594b9d", timestamp:time.Time{wall:0xbf71b5f87462ede6, ext:160467959084, loc:(*time.Location)(0x7124bc0)}}
I1203 13:17:22.990903  106607 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (2.43014ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:34922]
I1203 13:17:22.991054  106607 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-545c594b9d, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 4->5, availableReplicas 0->0, sequence No: 1->1
I1203 13:17:22.991329  106607 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (3.471298ms)
I1203 13:17:22.991356  106607 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-12-03 13:17:22.991349903 +0000 UTC m=+161.580410304)
I1203 13:17:22.993021  106607 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-545c594b9d/status: (1.773484ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:34924]
I1203 13:17:22.993251  106607 replica_set.go:659] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-545c594b9d" (2.305626ms)
I1203 13:17:22.993345  106607 deployment_controller.go:280] ReplicaSet deployment-545c594b9d updated.
I1203 13:17:22.993347  106607 controller_utils.go:185] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-545c594b9d", timestamp:time.Time{wall:0xbf71b5f87462ede6, ext:160467959084, loc:(*time.Location)(0x7124bc0)}}
I1203 13:17:22.993449  106607 replica_set.go:659] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-545c594b9d" (108.277µs)
I1203 13:17:22.997089  106607 httplog.go:90] PUT /api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-d24wz/status: (5.869474ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35224]
I1203 13:17:22.997440  106607 replica_set.go:441] Pod deployment-545c594b9d-d24wz updated, objectMeta {Name:deployment-545c594b9d-d24wz GenerateName:deployment-545c594b9d- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-d24wz UID:d378fcbb-6017-44d6-8fcc-faefd7984053 ResourceVersion:22310 Generation:0 CreationTimestamp:2019-12-03 13:17:21 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:545c594b9d] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-545c594b9d UID:a7e40980-fb57-4aea-ac4d-820c6b4b691f Controller:0xc02135e1e7 BlockOwnerDeletion:0xc02135e1e8}] Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-545c594b9d-d24wz GenerateName:deployment-545c594b9d- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-d24wz UID:d378fcbb-6017-44d6-8fcc-faefd7984053 ResourceVersion:22447 Generation:0 CreationTimestamp:2019-12-03 13:17:21 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:545c594b9d] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-545c594b9d UID:a7e40980-fb57-4aea-ac4d-820c6b4b691f Controller:0xc0217c9ce7 BlockOwnerDeletion:0xc0217c9ce8}] Finalizers:[] ClusterName: ManagedFields:[]}.
I1203 13:17:22.997555  106607 deployment_controller.go:175] Updating deployment deployment
I1203 13:17:22.997571  106607 replica_set.go:451] ReplicaSet "deployment-545c594b9d" will be enqueued after 3600s for availability check
I1203 13:17:22.997627  106607 controller_utils.go:185] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-545c594b9d", timestamp:time.Time{wall:0xbf71b5f87462ede6, ext:160467959084, loc:(*time.Location)(0x7124bc0)}}
I1203 13:17:22.997788  106607 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-545c594b9d, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 5->6, availableReplicas 0->0, sequence No: 1->1
I1203 13:17:22.998257  106607 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (6.32032ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:34922]
I1203 13:17:22.998546  106607 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (7.190828ms)
I1203 13:17:22.998584  106607 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-12-03 13:17:22.998579286 +0000 UTC m=+161.587639702)
I1203 13:17:22.999498  106607 httplog.go:90] PUT /api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-h5vkd/status: (1.914968ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35224]
I1203 13:17:23.000955  106607 deployment_controller.go:280] ReplicaSet deployment-545c594b9d updated.
I1203 13:17:23.001382  106607 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-545c594b9d/status: (3.351101ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:34924]
I1203 13:17:23.001709  106607 replica_set.go:659] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-545c594b9d" (4.084894ms)
I1203 13:17:23.001744  106607 controller_utils.go:185] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-545c594b9d", timestamp:time.Time{wall:0xbf71b5f87462ede6, ext:160467959084, loc:(*time.Location)(0x7124bc0)}}
I1203 13:17:23.001758  106607 replica_set.go:441] Pod deployment-545c594b9d-h5vkd updated, objectMeta {Name:deployment-545c594b9d-h5vkd GenerateName:deployment-545c594b9d- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-h5vkd UID:7988884d-adcb-4e22-b49d-f433e6f7c126 ResourceVersion:22316 Generation:0 CreationTimestamp:2019-12-03 13:17:21 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:545c594b9d] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-545c594b9d UID:a7e40980-fb57-4aea-ac4d-820c6b4b691f Controller:0xc021404657 BlockOwnerDeletion:0xc021404658}] Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-545c594b9d-h5vkd GenerateName:deployment-545c594b9d- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-h5vkd UID:7988884d-adcb-4e22-b49d-f433e6f7c126 ResourceVersion:22451 Generation:0 CreationTimestamp:2019-12-03 13:17:21 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:545c594b9d] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-545c594b9d UID:a7e40980-fb57-4aea-ac4d-820c6b4b691f Controller:0xc021bd1307 BlockOwnerDeletion:0xc021bd1308}] Finalizers:[] ClusterName: ManagedFields:[]}.
I1203 13:17:23.001849  106607 replica_set.go:451] ReplicaSet "deployment-545c594b9d" will be enqueued after 3600s for availability check
I1203 13:17:23.001831  106607 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-545c594b9d, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 6->7, availableReplicas 0->0, sequence No: 1->1
I1203 13:17:23.002853  106607 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (3.421859ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:34922]
I1203 13:17:23.003067  106607 httplog.go:90] PUT /api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-tj498/status: (3.191755ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35224]
I1203 13:17:23.003408  106607 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (4.824165ms)
I1203 13:17:23.003435  106607 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-12-03 13:17:23.003431736 +0000 UTC m=+161.592492137)
I1203 13:17:23.003488  106607 replica_set.go:441] Pod deployment-545c594b9d-tj498 updated, objectMeta {Name:deployment-545c594b9d-tj498 GenerateName:deployment-545c594b9d- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-tj498 UID:ba1e698d-453b-4a83-9474-c838ed9f7ed2 ResourceVersion:22314 Generation:0 CreationTimestamp:2019-12-03 13:17:21 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:545c594b9d] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-545c594b9d UID:a7e40980-fb57-4aea-ac4d-820c6b4b691f Controller:0xc021404077 BlockOwnerDeletion:0xc021404078}] Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-545c594b9d-tj498 GenerateName:deployment-545c594b9d- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-tj498 UID:ba1e698d-453b-4a83-9474-c838ed9f7ed2 ResourceVersion:22454 Generation:0 CreationTimestamp:2019-12-03 13:17:21 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:545c594b9d] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-545c594b9d UID:a7e40980-fb57-4aea-ac4d-820c6b4b691f Controller:0xc021a07797 BlockOwnerDeletion:0xc021a07798}] Finalizers:[] ClusterName: ManagedFields:[]}.
I1203 13:17:23.003667  106607 replica_set.go:451] ReplicaSet "deployment-545c594b9d" will be enqueued after 3600s for availability check
I1203 13:17:23.004039  106607 deployment_controller.go:175] Updating deployment deployment
I1203 13:17:23.006944  106607 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-545c594b9d/status: (4.271471ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:34924]
I1203 13:17:23.008399  106607 replica_set.go:659] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-545c594b9d" (6.65555ms)
I1203 13:17:23.008424  106607 deployment_controller.go:280] ReplicaSet deployment-545c594b9d updated.
I1203 13:17:23.008451  106607 controller_utils.go:185] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-545c594b9d", timestamp:time.Time{wall:0xbf71b5f87462ede6, ext:160467959084, loc:(*time.Location)(0x7124bc0)}}
I1203 13:17:23.008573  106607 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-545c594b9d, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 7->8, availableReplicas 0->0, sequence No: 1->1
I1203 13:17:23.008736  106607 httplog.go:90] PUT /api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-vrtt2/status: (4.627963ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:35224]
I1203 13:17:23.008857  106607 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (4.875562ms) 409 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:34922]
I1203 13:17:23.009131  106607 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (5.69425ms)
I1203 13:17:23.009243  106607 deployment_controller.go:484] Error syncing deployment test-deployment-available-condition/deployment: Operation cannot be fulfilled on deployments.apps "deployment": the object has been modified; please apply your changes to the latest version and try again
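
The 409 on the status PUT and the "object has been modified" error above are optimistic concurrency at work: the controller wrote with a stale resourceVersion and the apiserver rejected the update, so the sync is simply retried, as the immediately following "Started syncing" shows. Client code commonly wraps such writes in retry.RetryOnConflict; a sketch under the same context-free signature assumption as above, with an illustrative status mutation:

package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// updateAvailableReplicas re-reads the deployment on each attempt so the
// PUT carries a fresh resourceVersion; a 409 like the one in the log makes
// RetryOnConflict loop instead of failing the sync outright.
func updateAvailableReplicas(c kubernetes.Interface, ns, name string, available int32) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		d, err := c.AppsV1().Deployments(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		d.Status.AvailableReplicas = available
		_, err = c.AppsV1().Deployments(ns).UpdateStatus(d)
		return err
	})
}
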
I1203 13:17:23.009352  106607 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-12-03 13:17:23.00934696 +0000 UTC m=+161.598407364)
I1203 13:17:23.009297  106607 replica_set.go:441] Pod deployment-545c594b9d-vrtt2 updated, objectMeta {Name:deployment-545c594b9d-vrtt2 GenerateName:deployment-545c594b9d- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-vrtt2 UID:53d6bf68-0313-4829-8f0c-e45bb6e1ac56 ResourceVersion:22372 Generation:0 CreationTimestamp:2019-12-03 13:17:22 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:545c594b9d] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-545c594b9d UID:a7e40980-fb57-4aea-ac4d-820c6b4b691f Controller:0xc021648427 BlockOwnerDeletion:0xc021648428}] Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-545c594b9d-vrtt2 GenerateName:deployment-545c594b9d- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-vrtt2 UID:53d6bf68-0313-4829-8f0c-e45bb6e1ac56 ResourceVersion:22456 Generation:0 CreationTimestamp:2019-12-03 13:17:22 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:545c594b9d] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-545c594b9d UID:a7e40980-fb57-4aea-ac4d-820c6b4b691f Controller:0xc021ca2327 BlockOwnerDeletion:0xc021ca2328}] Finalizers:[] ClusterName: ManagedFields:[]}.
I1203 13:17:23.009765  106607 replica_set.go:451] ReplicaSet "deployment-545c594b9d" will be enqueued after 3600s for availability check
I1203 13:17:23.011484  106607 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-545c594b9d/status: (2.531686ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:34924]
I1203 13:17:23.011854  106607 deployment_controller.go:280] ReplicaSet deployment-545c594b9d updated.
I1203 13:17:23.011862  106607 replica_set.go:659] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-545c594b9d" (3.415178ms)
I1203 13:17:23.011902  106607 controller_utils.go:185] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-545c594b9d", timestamp:time.Time{wall:0xbf71b5f87462ede6, ext:160467959084, loc:(*time.Location)(0x7124bc0)}}
I1203 13:17:23.011993  106607 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-545c594b9d, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 8->9, availableReplicas 0->0, sequence No: 1->1
I1203 13:17:23.014062  106607 httplog.go:90] PUT /api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-vvqfg/status: (4.144333ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34922]
I1203 13:17:23.014286  106607 replica_set.go:441] Pod deployment-545c594b9d-vvqfg updated, objectMeta {Name:deployment-545c594b9d-vvqfg GenerateName:deployment-545c594b9d- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-vvqfg UID:b0acafff-33c8-439f-92c0-6b34c04a5d53 ResourceVersion:22307 Generation:0 CreationTimestamp:2019-12-03 13:17:21 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:545c594b9d] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-545c594b9d UID:a7e40980-fb57-4aea-ac4d-820c6b4b691f Controller:0xc0211be507 BlockOwnerDeletion:0xc0211be508}] Finalizers:[] ClusterName: ManagedFields:[]} -> {Name:deployment-545c594b9d-vvqfg GenerateName:deployment-545c594b9d- Namespace:test-deployment-available-condition SelfLink:/api/v1/namespaces/test-deployment-available-condition/pods/deployment-545c594b9d-vvqfg UID:b0acafff-33c8-439f-92c0-6b34c04a5d53 ResourceVersion:22459 Generation:0 CreationTimestamp:2019-12-03 13:17:21 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[name:test pod-template-hash:545c594b9d] Annotations:map[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:deployment-545c594b9d UID:a7e40980-fb57-4aea-ac4d-820c6b4b691f Controller:0xc021cbcf07 BlockOwnerDeletion:0xc021cbcf08}] Finalizers:[] ClusterName: ManagedFields:[]}.
I1203 13:17:23.014383  106607 replica_set.go:451] ReplicaSet "deployment-545c594b9d" will be enqueued after 3600s for availability check
I1203 13:17:23.015483  106607 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-545c594b9d/status: (3.126966ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:34924]
I1203 13:17:23.015794  106607 deployment_controller.go:280] ReplicaSet deployment-545c594b9d updated.
I1203 13:17:23.015795  106607 replica_set.go:659] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-545c594b9d" (3.901509ms)
I1203 13:17:23.015837  106607 controller_utils.go:185] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-545c594b9d", timestamp:time.Time{wall:0xbf71b5f87462ede6, ext:160467959084, loc:(*time.Location)(0x7124bc0)}}
I1203 13:17:23.015915  106607 deployment_controller.go:175] Updating deployment deployment
I1203 13:17:23.015941  106607 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-545c594b9d, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 9->10, availableReplicas 0->0, sequence No: 1->1
I1203 13:17:23.016464  106607 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (5.344252ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:35224]
I1203 13:17:23.016713  106607 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (7.36039ms)
I1203 13:17:23.016759  106607 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-12-03 13:17:23.016754584 +0000 UTC m=+161.605815008)
I1203 13:17:23.018674  106607 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-545c594b9d/status: (2.456585ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:34924]
I1203 13:17:23.019193  106607 replica_set.go:659] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-545c594b9d" (3.359622ms)
I1203 13:17:23.019505  106607 controller_utils.go:185] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-545c594b9d", timestamp:time.Time{wall:0xbf71b5f87462ede6, ext:160467959084, loc:(*time.Location)(0x7124bc0)}}
I1203 13:17:23.019743  106607 replica_set.go:659] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-545c594b9d" (243.307µs)
I1203 13:17:23.019800  106607 deployment_controller.go:280] ReplicaSet deployment-545c594b9d updated.
I1203 13:17:23.021161  106607 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (3.810671ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:35224]
I1203 13:17:23.021204  106607 deployment_controller.go:175] Updating deployment deployment
I1203 13:17:23.021668  106607 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (4.906837ms)
I1203 13:17:23.021728  106607 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-12-03 13:17:23.021722667 +0000 UTC m=+161.610783085)
I1203 13:17:23.025109  106607 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (2.798257ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:34924]
I1203 13:17:23.025218  106607 deployment_controller.go:175] Updating deployment deployment
I1203 13:17:23.025492  106607 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (3.762878ms)
I1203 13:17:23.025531  106607 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-12-03 13:17:23.025528015 +0000 UTC m=+161.614588416)
I1203 13:17:23.025920  106607 deployment_util.go:806] Deployment "deployment" timed out (false) [last progress check: 2019-12-03 13:17:23 +0000 UTC - now: 2019-12-03 13:17:23.025912661 +0000 UTC m=+161.614973063]
I1203 13:17:23.025972  106607 progress.go:193] Queueing up deployment "deployment" for a progress check after 7199s
I1203 13:17:23.025988  106607 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (457.845µs)
I1203 13:17:23.079267  106607 request.go:565] Throttling request took 195.217531ms, request: POST:http://127.0.0.1:42185/api/v1/namespaces/test-deployment-available-condition/events
I1203 13:17:23.082115  106607 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/events: (2.558669ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:34924]
I1203 13:17:23.169346  106607 request.go:565] Throttling request took 154.979799ms, request: GET:http://127.0.0.1:42185/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment
I1203 13:17:23.171549  106607 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (1.826992ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34924]
I1203 13:17:23.279209  106607 request.go:565] Throttling request took 196.554624ms, request: POST:http://127.0.0.1:42185/api/v1/namespaces/test-deployment-available-condition/events
I1203 13:17:23.282685  106607 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/events: (3.198137ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:34924]
I1203 13:17:23.369332  106607 request.go:565] Throttling request took 197.261511ms, request: GET:http://127.0.0.1:42185/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment
I1203 13:17:23.371448  106607 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (1.80849ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34924]
I1203 13:17:23.479237  106607 request.go:565] Throttling request took 196.121744ms, request: POST:http://127.0.0.1:42185/api/v1/namespaces/test-deployment-available-condition/events
I1203 13:17:23.482458  106607 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/events: (2.857044ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:34924]
I1203 13:17:23.569436  106607 request.go:565] Throttling request took 197.531242ms, request: GET:http://127.0.0.1:42185/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment
I1203 13:17:23.571390  106607 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (1.608519ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34924]
I1203 13:17:23.679197  106607 request.go:565] Throttling request took 196.141598ms, request: POST:http://127.0.0.1:42185/api/v1/namespaces/test-deployment-available-condition/events
I1203 13:17:23.681846  106607 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/events: (2.345676ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:34924]
I1203 13:17:23.769306  106607 request.go:565] Throttling request took 197.43643ms, request: GET:http://127.0.0.1:42185/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment
I1203 13:17:23.777401  106607 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (7.684788ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34924]
I1203 13:17:23.879235  106607 request.go:565] Throttling request took 196.999367ms, request: POST:http://127.0.0.1:42185/api/v1/namespaces/test-deployment-available-condition/events
I1203 13:17:23.881919  106607 httplog.go:90] POST /api/v1/namespaces/test-deployment-available-condition/events: (2.366849ms) 201 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:34924]
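The repeated "Throttling request took ..." messages above come from client-go's client-side token-bucket rate limiter, not from the API server. A minimal sketch of raising those limits on a rest.Config, assuming a recent client-go; the kubeconfig path is illustrative:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path is illustrative; any *rest.Config works the same way.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	// The defaults (QPS=5, Burst=10) are why each request above queues
	// for roughly 200ms; raising them trades API-server load for latency.
	cfg.QPS = 50
	cfg.Burst = 100
	fmt.Printf("client rate limit: qps=%v burst=%v\n", cfg.QPS, cfg.Burst)
}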
I1203 13:17:23.969397  106607 request.go:565] Throttling request took 191.484545ms, request: PUT:http://127.0.0.1:42185/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment
I1203 13:17:23.972720  106607 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (2.99408ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34924]
I1203 13:17:23.973041  106607 deployment_controller.go:175] Updating deployment deployment
I1203 13:17:23.973086  106607 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-12-03 13:17:23.973065862 +0000 UTC m=+162.562126277)
I1203 13:17:23.976003  106607 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-545c594b9d: (2.383943ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:34924]
I1203 13:17:23.976749  106607 controller_utils.go:185] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-545c594b9d", timestamp:time.Time{wall:0xbf71b5f87462ede6, ext:160467959084, loc:(*time.Location)(0x7124bc0)}}
I1203 13:17:23.976900  106607 replica_set_utils.go:58] Updating status for : test-deployment-available-condition/deployment-545c594b9d, replicas 10->10 (need 10), fullyLabeledReplicas 10->10, readyReplicas 10->10, availableReplicas 0->8, sequence No: 1->2
I1203 13:17:23.977016  106607 deployment_controller.go:280] ReplicaSet deployment-545c594b9d updated.
I1203 13:17:23.979023  106607 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-545c594b9d/status: (1.913033ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/replicaset-controller 127.0.0.1:34922]
I1203 13:17:23.979423  106607 replica_set.go:659] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-545c594b9d" (2.68045ms)
I1203 13:17:23.979529  106607 controller_utils.go:185] Controller expectations fulfilled &controller.ControlleeExpectations{add:0, del:0, key:"test-deployment-available-condition/deployment-545c594b9d", timestamp:time.Time{wall:0xbf71b5f87462ede6, ext:160467959084, loc:(*time.Location)(0x7124bc0)}}
I1203 13:17:23.979550  106607 deployment_controller.go:280] ReplicaSet deployment-545c594b9d updated.
I1203 13:17:23.979627  106607 replica_set.go:659] Finished syncing ReplicaSet "test-deployment-available-condition/deployment-545c594b9d" (103.475µs)
I1203 13:17:23.982155  106607 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/replicasets/deployment-545c594b9d: (5.095981ms) 409 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:34924]
I1203 13:17:23.982356  106607 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (9.285515ms)
I1203 13:17:23.982381  106607 deployment_controller.go:484] Error syncing deployment test-deployment-available-condition/deployment: Operation cannot be fulfilled on replicasets.apps "deployment-545c594b9d": the object has been modified; please apply your changes to the latest version and try again
I1203 13:17:23.982409  106607 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-12-03 13:17:23.98240549 +0000 UTC m=+162.571465905)
I1203 13:17:23.985034  106607 httplog.go:90] PUT /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment/status: (2.082374ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-controller 127.0.0.1:34924]
I1203 13:17:23.985317  106607 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (2.905605ms)
I1203 13:17:23.985387  106607 deployment_controller.go:175] Updating deployment deployment
I1203 13:17:23.985424  106607 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-12-03 13:17:23.98540163 +0000 UTC m=+162.574462051)
I1203 13:17:23.985830  106607 deployment_util.go:806] Deployment "deployment" timed out (false) [last progress check: 2019-12-03 13:17:23 +0000 UTC - now: 2019-12-03 13:17:23.985819378 +0000 UTC m=+162.574879779]
I1203 13:17:23.985956  106607 progress.go:193] Queueing up deployment "deployment" for a progress check after 7199s
I1203 13:17:23.985973  106607 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (568.813µs)
I1203 13:17:23.987556  106607 deployment_controller.go:564] Started syncing deployment "test-deployment-available-condition/deployment" (2019-12-03 13:17:23.987547301 +0000 UTC m=+162.576607716)
I1203 13:17:23.987882  106607 deployment_util.go:806] Deployment "deployment" timed out (false) [last progress check: 2019-12-03 13:17:23 +0000 UTC - now: 2019-12-03 13:17:23.987874976 +0000 UTC m=+162.576935390]
I1203 13:17:23.987929  106607 progress.go:193] Queueing up deployment "deployment" for a progress check after 7199s
I1203 13:17:23.987949  106607 deployment_controller.go:566] Finished syncing deployment "test-deployment-available-condition/deployment" (398.497µs)
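The 409 at 13:17:23.982 ("the object has been modified") is an ordinary optimistic-concurrency conflict: the deployment controller's PUT carried a stale resourceVersion because the replicaset controller had just written status, and the sync is simply requeued, as the following lines show. A minimal sketch of the standard read-modify-retry pattern with client-go's retry helper, assuming a recent client-go with context-taking methods; the function and names are illustrative, not the controller's actual code:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// scaleReplicaSet re-reads the object on every attempt so each PUT
// carries the latest resourceVersion; retry.DefaultRetry backs off
// and retries only on Conflict (409) errors.
func scaleReplicaSet(cs kubernetes.Interface, ns, name string, replicas int32) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		rs, err := cs.AppsV1().ReplicaSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		rs.Spec.Replicas = &replicas
		_, err = cs.AppsV1().ReplicaSets(ns).Update(context.TODO(), rs, metav1.UpdateOptions{})
		return err
	})
}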
I1203 13:17:24.169333  106607 request.go:565] Throttling request took 196.205082ms, request: GET:http://127.0.0.1:42185/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment
I1203 13:17:24.171330  106607 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (1.643012ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34924]
I1203 13:17:24.369327  106607 request.go:565] Throttling request took 197.519766ms, request: GET:http://127.0.0.1:42185/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment
I1203 13:17:24.371323  106607 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (1.695637ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34924]
I1203 13:17:24.569540  106607 request.go:565] Throttling request took 197.731743ms, request: GET:http://127.0.0.1:42185/apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment
I1203 13:17:24.571962  106607 httplog.go:90] GET /apis/apps/v1/namespaces/test-deployment-available-condition/deployments/deployment: (2.0128ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34924]
I1203 13:17:24.572425  106607 controller.go:180] Shutting down kubernetes service endpoint reconciler
I1203 13:17:24.572521  106607 deployment_controller.go:164] Shutting down deployment controller
I1203 13:17:24.572611  106607 replica_set.go:192] Shutting down replicaset controller
I1203 13:17:24.572751  106607 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=21793&timeout=5m20s&timeoutSeconds=320&watch=true: (2.798744373s) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-informers 127.0.0.1:34892]
I1203 13:17:24.572763  106607 httplog.go:90] GET /apis/apps/v1/deployments?allowWatchBookmarks=true&resourceVersion=22289&timeout=5m38s&timeoutSeconds=338&watch=true: (2.799411626s) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-informers 127.0.0.1:34894]
I1203 13:17:24.572837  106607 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=21788&timeout=6m29s&timeoutSeconds=389&watch=true: (2.798752555s) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format/deployment-informers 127.0.0.1:34388]
I1203 13:17:24.574310  106607 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.564378ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34924]
I1203 13:17:24.576947  106607 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (1.917212ms) 200 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34924]
I1203 13:17:24.577337  106607 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
I1203 13:17:24.577418  106607 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&resourceVersion=21788&timeout=8m38s&timeoutSeconds=518&watch=true: (6.108889864s) 0 [deployment.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:34104]
--- FAIL: TestDeploymentAvailableCondition (6.27s)
    deployment.go:268: Updating deployment deployment
    deployment_test.go:989: unexpected .replicas: expect 10, got 8

				from junit_304dbea7698c16157bb4586f231ea1f94495b046_20191203-131208.xml
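The failure appears to be a timing race rather than a functional bug: the assertion observed 8 where it expected 10, matching the intermediate availableReplicas 0->8 status write at 13:17:23.976, before the last two pods were counted. A hedged sketch of polling until the status converges instead of asserting on a single read; the helper and names are illustrative, not the test's actual code:

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForAvailableReplicas polls until the ReplicaSet status catches up,
// tolerating the intermediate states the controllers write along the way.
func waitForAvailableReplicas(cs kubernetes.Interface, ns, name string, want int32) error {
	return wait.PollImmediate(100*time.Millisecond, 30*time.Second, func() (bool, error) {
		rs, err := cs.AppsV1().ReplicaSets(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return rs.Status.AvailableReplicas == want, nil
	})
}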


Error lines from build-log.txt

... skipping 56 lines ...
Recording: record_command_canary
Running command: record_command_canary

+++ Running case: test-cmd.record_command_canary 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: record_command_canary
/home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 155: bogus-expected-to-fail: command not found
!!! [1203 13:02:42] Call tree:
!!! [1203 13:02:42]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 record_command_canary(...)
!!! [1203 13:02:42]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [1203 13:02:42]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:131 juLog(...)
!!! [1203 13:02:42]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:159 record_command(...)
!!! [1203 13:02:42]  5: hack/make-rules/test-cmd.sh:35 source(...)
+++ exit code: 1
+++ error: 1
+++ [1203 13:02:42] Running kubeadm tests
+++ [1203 13:02:47] Building go targets for linux/amd64:
    cmd/kubeadm
Running tests for APIVersion: v1,admissionregistration.k8s.io/v1,admissionregistration.k8s.io/v1beta1,admission.k8s.io/v1,admission.k8s.io/v1beta1,apps/v1,apps/v1beta1,apps/v1beta2,auditregistration.k8s.io/v1alpha1,authentication.k8s.io/v1,authentication.k8s.io/v1beta1,authorization.k8s.io/v1,authorization.k8s.io/v1beta1,autoscaling/v1,autoscaling/v2beta1,autoscaling/v2beta2,batch/v1,batch/v1beta1,batch/v2alpha1,certificates.k8s.io/v1beta1,coordination.k8s.io/v1beta1,coordination.k8s.io/v1,discovery.k8s.io/v1alpha1,discovery.k8s.io/v1beta1,extensions/v1beta1,events.k8s.io/v1beta1,imagepolicy.k8s.io/v1alpha1,networking.k8s.io/v1,networking.k8s.io/v1beta1,node.k8s.io/v1alpha1,node.k8s.io/v1beta1,policy/v1beta1,rbac.authorization.k8s.io/v1,rbac.authorization.k8s.io/v1beta1,rbac.authorization.k8s.io/v1alpha1,scheduling.k8s.io/v1alpha1,scheduling.k8s.io/v1beta1,scheduling.k8s.io/v1,settings.k8s.io/v1alpha1,storage.k8s.io/v1beta1,storage.k8s.io/v1,storage.k8s.io/v1alpha1,flowcontrol.apiserver.k8s.io/v1alpha1,
+++ [1203 13:03:29] Running tests without code coverage
{"Time":"2019-12-03T13:04:44.017498154Z","Action":"output","Package":"k8s.io/kubernetes/cmd/kubeadm/test/cmd","Output":"ok  \tk8s.io/kubernetes/cmd/kubeadm/test/cmd\t38.650s\n"}
... skipping 303 lines ...
+++ [1203 13:06:23] Building kube-controller-manager
+++ [1203 13:06:28] Building go targets for linux/amd64:
    cmd/kube-controller-manager
+++ [1203 13:06:55] Starting controller-manager
Flag --port has been deprecated, see --secure-port instead.
I1203 13:06:56.334114   54248 serving.go:312] Generated self-signed cert in-memory
W1203 13:06:56.815889   54248 authentication.go:409] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W1203 13:06:56.815950   54248 authentication.go:267] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W1203 13:06:56.815958   54248 authentication.go:291] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W1203 13:06:56.815977   54248 authorization.go:177] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W1203 13:06:56.815993   54248 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I1203 13:06:56.816020   54248 controllermanager.go:161] Version: v1.18.0-alpha.0.1373+28485db5d99a7f
I1203 13:06:56.817341   54248 secure_serving.go:178] Serving securely on [::]:10257
I1203 13:06:56.817409   54248 tlsconfig.go:219] Starting DynamicServingCertificateController
I1203 13:06:56.818188   54248 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
I1203 13:06:56.818392   54248 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...
... skipping 117 lines ...
I1203 13:06:57.851202   54248 shared_informer.go:197] Waiting for caches to sync for ReplicaSet
I1203 13:06:57.851001   54248 daemon_controller.go:255] Starting daemon sets controller
I1203 13:06:57.851233   54248 shared_informer.go:197] Waiting for caches to sync for daemon sets
I1203 13:06:57.851036   54248 job_controller.go:143] Starting job controller
I1203 13:06:57.851296   54248 shared_informer.go:197] Waiting for caches to sync for job
I1203 13:06:57.851077   54248 cleaner.go:81] Starting CSR cleaner controller
E1203 13:06:57.851164   54248 core.go:91] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W1203 13:06:57.851359   54248 controllermanager.go:525] Skipping "service"
I1203 13:06:57.851384   54248 core.go:242] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W1203 13:06:57.851391   54248 controllermanager.go:525] Skipping "route"
I1203 13:06:57.851850   54248 controllermanager.go:533] Started "serviceaccount"
W1203 13:06:57.851881   54248 controllermanager.go:525] Skipping "nodeipam"
I1203 13:06:57.851897   54248 serviceaccounts_controller.go:116] Starting service account controller
... skipping 53 lines ...
I1203 13:06:57.873317   54248 shared_informer.go:197] Waiting for caches to sync for GC
I1203 13:06:57.873729   54248 controllermanager.go:533] Started "ttl"
W1203 13:06:57.873752   54248 controllermanager.go:512] "bootstrapsigner" is disabled
I1203 13:06:57.873889   54248 ttl_controller.go:116] Starting TTL controller
I1203 13:06:57.873902   54248 shared_informer.go:197] Waiting for caches to sync for TTL
I1203 13:06:57.874142   54248 node_lifecycle_controller.go:77] Sending events to api server
E1203 13:06:57.874186   54248 core.go:232] failed to start cloud node lifecycle controller: no cloud provider provided
W1203 13:06:57.874202   54248 controllermanager.go:525] Skipping "cloud-node-lifecycle"
W1203 13:06:57.905102   54248 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
I1203 13:06:57.954997   54248 shared_informer.go:204] Caches are synced for ClusterRoleAggregator 
I1203 13:06:57.959195   54248 shared_informer.go:204] Caches are synced for PV protection 
E1203 13:06:57.966352   54248 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I1203 13:06:57.974096   54248 shared_informer.go:204] Caches are synced for TTL 
+++ [1203 13:06:58] Testing kubectl version: check client only output matches expected output
I1203 13:06:58.071191   54248 shared_informer.go:204] Caches are synced for certificate-csrapproving 
I1203 13:06:58.172615   54248 shared_informer.go:204] Caches are synced for expand 
Successful: the flag '--client' shows correct client info
Successful: the flag '--client' correctly has no server version info
... skipping 82 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [1203 13:07:01] Creating namespace namespace-1575378421-504
namespace/namespace-1575378421-504 created
Context "test" modified.
+++ [1203 13:07:01] Testing RESTMapper
+++ [1203 13:07:02] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
bindings                                                                      true         Binding
componentstatuses                 cs                                          false        ComponentStatus
configmaps                        cm                                          true         ConfigMap
endpoints                         ep                                          true         Endpoints
... skipping 601 lines ...
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
has:valid-pod
core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: resource(s) were provided, but no name, label selector, or --all flag specified
core.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: setting 'all' parameter but found a non empty selector. 
core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:211: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
... skipping 12 lines ...
poddisruptionbudget.policy/test-pdb-2 created
core.sh:245: Successful get pdb/test-pdb-2 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 50%
poddisruptionbudget.policy/test-pdb-3 created
core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
poddisruptionbudget.policy/test-pdb-4 created
core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
error: min-available and max-unavailable cannot be both specified
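The "cannot be both specified" error reflects that the two PodDisruptionBudget thresholds are mutually exclusive: set exactly one of minAvailable or maxUnavailable. A minimal sketch of the policy/v1beta1 types involved, assuming that API version; the names are illustrative:

package main

import (
	"fmt"

	policyv1beta1 "k8s.io/api/policy/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	minAvailable := intstr.FromString("50%") // percentages and integers are both accepted
	pdb := policyv1beta1.PodDisruptionBudget{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pdb"},
		Spec: policyv1beta1.PodDisruptionBudgetSpec{
			// Set exactly one of MinAvailable / MaxUnavailable;
			// kubectl rejects a budget that specifies both.
			MinAvailable: &minAvailable,
		},
	}
	fmt.Println(pdb.Name)
}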
core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
pod/env-test-pod created
matched TEST_CMD_1
matched <set to the key 'key-1' in secret 'test-secret'>
matched TEST_CMD_2
matched <set to the key 'key-2' of config map 'test-configmap'>
... skipping 188 lines ...
pod/valid-pod patched
core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
pod/valid-pod patched
core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
pod/valid-pod patched
core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
+++ [1203 13:07:39] "kubectl patch with resourceVersion 523" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
pod "valid-pod" deleted
pod/valid-pod replaced
core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
Successful
message:error: --grace-period must have --force specified
has:\-\-grace-period must have \-\-force specified
Successful
message:error: --timeout must have --force specified
has:\-\-timeout must have \-\-force specified
node/node-v1-test created
W1203 13:07:40.713096   54248 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
node/node-v1-test replaced
core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
node "node-v1-test" deleted
core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
Edit cancelled, no changes made.
... skipping 22 lines ...
spec:
  containers:
  - image: k8s.gcr.io/pause:2.0
    name: kubernetes-pause
has:localonlyvalue
core.sh:585: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
error: 'name' already has a value (valid-pod), and --overwrite is false
core.sh:589: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
core.sh:593: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
pod/valid-pod labeled
core.sh:597: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
core.sh:601: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 85 lines ...
+++ Running case: test-cmd.run_kubectl_create_error_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_error_tests
+++ [1203 13:07:50] Creating namespace namespace-1575378470-965
namespace/namespace-1575378470-965 created
Context "test" modified.
+++ [1203 13:07:50] Testing kubectl create with error
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
... skipping 41 lines ...

Usage:
  kubectl create -f FILENAME [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
+++ [1203 13:07:51] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
+++ exit code: 0
Recording: run_kubectl_apply_tests
Running command: run_kubectl_apply_tests

... skipping 17 lines ...
pod "test-pod" deleted
customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
I1203 13:07:53.672603   50788 client.go:361] parsed scheme: "endpoint"
I1203 13:07:53.672666   50788 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1203 13:07:53.676291   50788 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
kind.mygroup.example.com/myobj serverside-applied (server dry run)
Error from server (NotFound): resources.mygroup.example.com "myobj" not found
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
+++ exit code: 0
Recording: run_kubectl_run_tests
Running command: run_kubectl_run_tests

+++ Running case: test-cmd.run_kubectl_run_tests 
... skipping 102 lines ...
Context "test" modified.
+++ [1203 13:07:56] Testing kubectl create filter
create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/selector-test-pod created
create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_apply_deployments_tests
Running command: run_kubectl_apply_deployments_tests

... skipping 29 lines ...
I1203 13:07:59.584778   54248 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1575378477-24836", Name:"nginx", UID:"819a2ef3-3985-43a1-bf4a-c56650fc8eef", APIVersion:"apps/v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-8484dd655 to 3
I1203 13:07:59.588201   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575378477-24836", Name:"nginx-8484dd655", UID:"96f2392a-d39e-41a3-9455-9197bc12760c", APIVersion:"apps/v1", ResourceVersion:"617", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-7z959
I1203 13:07:59.593337   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575378477-24836", Name:"nginx-8484dd655", UID:"96f2392a-d39e-41a3-9455-9197bc12760c", APIVersion:"apps/v1", ResourceVersion:"617", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-tprxm
I1203 13:07:59.593730   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575378477-24836", Name:"nginx-8484dd655", UID:"96f2392a-d39e-41a3-9455-9197bc12760c", APIVersion:"apps/v1", ResourceVersion:"617", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-vj9ph
apps.sh:148: Successful get deployment nginx {{.metadata.name}}: nginx
Successful
message:Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1575378477-24836\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx", Namespace: "namespace-1575378477-24836"
for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
has:Error from server (Conflict)
I1203 13:08:04.996189   54248 horizontal.go:341] Horizontal Pod Autoscaler frontend has been deleted in namespace-1575378468-23790
E1203 13:08:08.160436   54248 replica_set.go:534] sync "namespace-1575378477-24836/nginx-8484dd655" failed with Operation cannot be fulfilled on replicasets.apps "nginx-8484dd655": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1575378477-24836/nginx-8484dd655, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 96f2392a-d39e-41a3-9455-9197bc12760c, UID in object meta: 
E1203 13:08:08.164780   54248 replica_set.go:534] sync "namespace-1575378477-24836/nginx-8484dd655" failed with replicasets.apps "nginx-8484dd655" not found
deployment.apps/nginx configured
I1203 13:08:09.136030   54248 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1575378477-24836", Name:"nginx", UID:"b50b230e-f9c2-4253-be39-e46a65c3fcca", APIVersion:"apps/v1", ResourceVersion:"654", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-668b6c7744 to 3
I1203 13:08:09.141179   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575378477-24836", Name:"nginx-668b6c7744", UID:"81d604a0-7d3d-4a28-a6ea-cdf5b126b51a", APIVersion:"apps/v1", ResourceVersion:"655", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-dwnnd
I1203 13:08:09.144156   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575378477-24836", Name:"nginx-668b6c7744", UID:"81d604a0-7d3d-4a28-a6ea-cdf5b126b51a", APIVersion:"apps/v1", ResourceVersion:"655", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-wrj8g
I1203 13:08:09.146521   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575378477-24836", Name:"nginx-668b6c7744", UID:"81d604a0-7d3d-4a28-a6ea-cdf5b126b51a", APIVersion:"apps/v1", ResourceVersion:"655", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-bgr5w
Successful
... skipping 141 lines ...
+++ [1203 13:08:16] Creating namespace namespace-1575378496-3442
namespace/namespace-1575378496-3442 created
Context "test" modified.
+++ [1203 13:08:16] Testing kubectl get
get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:{
    "apiVersion": "v1",
    "items": [],
... skipping 23 lines ...
has not:No resources found
Successful
message:NAME
has not:No resources found
get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:error: the server doesn't have a resource type "foobar"
has not:No resources found
Successful
message:No resources found in namespace-1575378496-3442 namespace.
has:No resources found
Successful
message:
has not:No resources found
Successful
message:No resources found in namespace-1575378496-3442 namespace.
has:No resources found
get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
Successful
message:Error from server (NotFound): pods "abc" not found
has not:List
Successful
message:I1203 13:08:18.659600   64699 loader.go:375] Config loaded from file:  /tmp/tmp.IOosRKPwtE/.kube/config
I1203 13:08:18.661117   64699 round_trippers.go:443] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I1203 13:08:18.685212   64699 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
I1203 13:08:18.687042   64699 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds
... skipping 479 lines ...
Successful
message:NAME    DATA   AGE
one     0      1s
three   0      0s
two     0      0s
STATUS    REASON          MESSAGE
Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has not:watch is only supported on individual resources
Successful
message:STATUS    REASON          MESSAGE
Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has not:watch is only supported on individual resources
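The "unable to decode an event from the watch stream" failures above are the client's own HTTP request timeout cutting off a long-running watch mid-stream, which these checks deliberately provoke. As a sketch, bounding the watch server-side with ListOptions.TimeoutSeconds lets it end with a clean channel close instead; this assumes a recent client-go with a context-taking Watch, and the resource choice is illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchWithTimeout asks the API server to close the watch after a bounded
// interval, so the event channel ends cleanly rather than being torn by a
// client-side HTTP timeout mid-event.
func watchWithTimeout(cs kubernetes.Interface, ns string) error {
	timeout := int64(30) // seconds; the server closes the stream after this
	w, err := cs.CoreV1().ConfigMaps(ns).Watch(context.TODO(), metav1.ListOptions{TimeoutSeconds: &timeout})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Println(ev.Type)
	}
	return nil // channel closed cleanly when the timeout elapsed
}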
+++ [1203 13:08:25] Creating namespace namespace-1575378505-19082
namespace/namespace-1575378505-19082 created
Context "test" modified.
get.sh:153: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/valid-pod created
... skipping 56 lines ...
}
get.sh:158: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
<no value>Successful
message:valid-pod:
has:valid-pod:
Successful
message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-12-03T13:08:25Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1575378505-19082", "resourceVersion":"741", "selfLink":"/api/v1/namespaces/namespace-1575378505-19082/pods/valid-pod", "uid":"0c032a88-18e1-4612-ad73-6d201f524932"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
has:missing is not found
error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
Successful
message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-12-03T13:08:25Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1575378505-19082","resourceVersion":"741","selfLink":"/api/v1/namespaces/namespace-1575378505-19082/pods/valid-pod","uid":"0c032a88-18e1-4612-ad73-6d201f524932"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-12-03T13:08:25Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1575378505-19082 resourceVersion:741 selfLink:/api/v1/namespaces/namespace-1575378505-19082/pods/valid-pod uid:0c032a88-18e1-4612-ad73-6d201f524932] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
has:map has no entry for key "missing"
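The two go-template failures above differ because Go's text/template prints "<no value>" for a missing map key by default but errors under Option("missingkey=error"); that option is what kubectl's --allow-missing-template-keys=false appears to select (the flag mapping is an assumption here, the stdlib behavior below is not). A stdlib-only sketch of the difference:

package main

import (
	"os"
	"text/template"
)

func main() {
	obj := map[string]interface{}{"kind": "Pod"}

	// Default behavior: a missing map key renders as "<no value>",
	// as seen elsewhere in this log, and Execute returns nil.
	lenient := template.Must(template.New("out").Parse("{{.missing}}\n"))
	_ = lenient.Execute(os.Stdout, obj)

	// With missingkey=error the same template fails with
	// `map has no entry for key "missing"`, matching the error above.
	strict := template.Must(template.New("out").Option("missingkey=error").Parse("{{.missing}}\n"))
	if err := strict.Execute(os.Stdout, obj); err != nil {
		os.Stderr.WriteString(err.Error() + "\n")
	}
}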
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
STATUS      REASON          MESSAGE
Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has:STATUS
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
STATUS      REASON          MESSAGE
Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has:valid-pod
Successful
message:pod/valid-pod
status/<unknown>
has not:STATUS
Successful
... skipping 45 lines ...
      (Client.Timeout exceeded while reading body)'
    reason: UnexpectedServerResponse
  - message: 'unable to decode an event from the watch stream: net/http: request canceled
      (Client.Timeout exceeded while reading body)'
    reason: ClientWatchDecoding
kind: Status
message: 'an error on the server ("unable to decode an event from the watch stream:
  net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented
  the request from succeeding'
metadata: {}
reason: InternalError
status: Failure
has not:STATUS
... skipping 42 lines ...
      (Client.Timeout exceeded while reading body)'
    reason: UnexpectedServerResponse
  - message: 'unable to decode an event from the watch stream: net/http: request canceled
      (Client.Timeout exceeded while reading body)'
    reason: ClientWatchDecoding
kind: Status
message: 'an error on the server ("unable to decode an event from the watch stream:
  net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented
  the request from succeeding'
metadata: {}
reason: InternalError
status: Failure
has:name: valid-pod
Successful
message:Error from server (NotFound): pods "invalid-pod" not found
has:"invalid-pod" not found
pod "valid-pod" deleted
get.sh:196: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/redis-master created
pod/valid-pod created
Successful
... skipping 35 lines ...
+++ command: run_kubectl_exec_pod_tests
+++ [1203 13:08:31] Creating namespace namespace-1575378511-27421
namespace/namespace-1575378511-27421 created
Context "test" modified.
+++ [1203 13:08:31] Testing kubectl exec POD COMMAND
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
pod/test-pod created
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pods "test-pod" not found
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
pod "test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_exec_resource_name_tests
Running command: run_kubectl_exec_resource_name_tests

... skipping 2 lines ...
+++ command: run_kubectl_exec_resource_name_tests
+++ [1203 13:08:32] Creating namespace namespace-1575378512-30989
namespace/namespace-1575378512-30989 created
Context "test" modified.
+++ [1203 13:08:32] Testing kubectl exec TYPE/NAME COMMAND
Successful
message:error: the server doesn't have a resource type "foo"
has:error:
Successful
message:Error from server (NotFound): deployments.apps "bar" not found
has:"bar" not found
pod/test-pod created
replicaset.apps/frontend created
I1203 13:08:32.932327   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575378512-30989", Name:"frontend", UID:"11f6a5bb-a636-4b39-b31f-04e945e61f97", APIVersion:"apps/v1", ResourceVersion:"797", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-v7gw8
I1203 13:08:32.936025   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575378512-30989", Name:"frontend", UID:"11f6a5bb-a636-4b39-b31f-04e945e61f97", APIVersion:"apps/v1", ResourceVersion:"797", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-bwsr6
I1203 13:08:32.936153   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575378512-30989", Name:"frontend", UID:"11f6a5bb-a636-4b39-b31f-04e945e61f97", APIVersion:"apps/v1", ResourceVersion:"797", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-l2rx7
configmap/test-set-env-config created
Successful
message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
has:not implemented
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:not found
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
Successful
message:Error from server (BadRequest): pod frontend-bwsr6 does not have a host assigned
has not:not found
Successful
message:Error from server (BadRequest): pod frontend-bwsr6 does not have a host assigned
has not:pod or type/name must be specified
pod "test-pod" deleted
replicaset.apps "frontend" deleted
configmap "test-set-env-config" deleted
+++ exit code: 0
Recording: run_create_secret_tests
Running command: run_create_secret_tests

+++ Running case: test-cmd.run_create_secret_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_create_secret_tests
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
message:user-specified
has:user-specified
Successful
{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"4af14cda-de90-4a01-b5b4-a88960d2ab5f","resourceVersion":"819","creationTimestamp":"2019-12-03T13:08:34Z"}}
... skipping 2 lines ...
has:uid
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"4af14cda-de90-4a01-b5b4-a88960d2ab5f","resourceVersion":"820","creationTimestamp":"2019-12-03T13:08:34Z"},"data":{"key1":"config1"}}
has:config1
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"4af14cda-de90-4a01-b5b4-a88960d2ab5f"}}
Successful
message:Error from server (NotFound): configmaps "tester-update-cm" not found
has:configmaps "tester-update-cm" not found
+++ exit code: 0
Recording: run_kubectl_create_kustomization_directory_tests
Running command: run_kubectl_create_kustomization_directory_tests

+++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 110 lines ...
valid-pod   0/1     Pending   0          0s
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
STATUS      REASON          MESSAGE
Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has:Timeout exceeded while reading body
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
Successful
message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
has:Invalid timeout value
pod "valid-pod" deleted
+++ exit code: 0
Recording: run_crd_tests
Running command: run_crd_tests

... skipping 158 lines ...
foo.company.com/test patched
crd.sh:236: Successful get foos/test {{.patched}}: value1
foo.company.com/test patched
crd.sh:238: Successful get foos/test {{.patched}}: value2
foo.company.com/test patched
crd.sh:240: Successful get foos/test {{.patched}}: <no value>
+++ [1203 13:08:44] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
{
    "apiVersion": "company.com/v1",
    "kind": "Foo",
    "metadata": {
        "annotations": {
            "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 193 lines ...
crd.sh:450: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: 
namespace/non-native-resources created
bar.company.com/test created
crd.sh:455: Successful get bars {{len .items}}: 1
namespace "non-native-resources" deleted
crd.sh:458: Successful get bars {{len .items}}: 0
Error from server (NotFound): namespaces "non-native-resources" not found
customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
+++ exit code: 0
Recording: run_cmd_with_img_tests
... skipping 11 lines ...
I1203 13:09:13.394972   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575378553-15600", Name:"test1-6cdffdb5b8", UID:"5b057e2c-eb6c-4be8-b31c-6e992205fa5d", APIVersion:"apps/v1", ResourceVersion:"994", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-6cdffdb5b8-h5csx
Successful
message:deployment.apps/test1 created
has:deployment.apps/test1 created
deployment.apps "test1" deleted
Successful
message:error: Invalid image name "InvalidImageName": invalid reference format
has:error: Invalid image name "InvalidImageName": invalid reference format
+++ exit code: 0
+++ [1203 13:09:13] Testing recursive resources
+++ [1203 13:09:13] Creating namespace namespace-1575378553-9428
namespace/namespace-1575378553-9428 created
W1203 13:09:13.730420   50788 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
E1203 13:09:13.731818   54248 reflector.go:320] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
Context "test" modified.
W1203 13:09:13.821775   50788 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
E1203 13:09:13.823062   54248 reflector.go:320] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
W1203 13:09:13.936729   50788 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
E1203 13:09:13.938004   54248 reflector.go:320] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W1203 13:09:14.045477   50788 cacher.go:162] Terminating all watchers from cacher *unstructured.Unstructured
E1203 13:09:14.046691   54248 reflector.go:320] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:pod/busybox0 created
pod/busybox1 created
error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
Successful
message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
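The recurring `Object 'Kind' is missing` errors come from the broken fixture, which spells the field "ind" instead of "kind"; without a Kind the deserializer has no way to pick a Go type. A minimal sketch reproducing the error with the client-go scheme codecs:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes/scheme"
)

func main() {
	// "ind" instead of "kind", as in busybox-broken.yaml above.
	data := []byte(`{"apiVersion":"v1","ind":"Pod","metadata":{"name":"busybox2"}}`)
	// Decode fails because no Kind is present and none is supplied as a default.
	_, _, err := scheme.Codecs.UniversalDeserializer().Decode(data, nil, nil)
	fmt.Println(err) // Object 'Kind' is missing in '...'
}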
generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E1203 13:09:14.732826   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1203 13:09:14.824321   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:pod/busybox0 replaced
pod/busybox1 replaced
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E1203 13:09:14.939160   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:Name:         busybox0
Namespace:    namespace-1575378553-9428
Priority:     0
Node:         <none>
Labels:       app=busybox0
... skipping 153 lines ...
QoS Class:        BestEffort
Node-Selectors:   <none>
Tolerations:      <none>
Events:           <none>
unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
E1203 13:09:15.048025   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
Successful
message:pod/busybox0 annotated
pod/busybox1 annotated
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod/busybox0 configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod/busybox1 configured
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
E1203 13:09:15.734226   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
E1203 13:09:15.826811   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1203 13:09:15.940716   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx created
I1203 13:09:15.992390   54248 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1575378553-9428", Name:"nginx", UID:"97841292-069b-4008-8dc6-f55c15d4cc1a", APIVersion:"apps/v1", ResourceVersion:"1021", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-f87d999f7 to 3
I1203 13:09:15.996178   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575378553-9428", Name:"nginx-f87d999f7", UID:"f62d135c-cf85-4a8c-86cf-fdebdd3a3e00", APIVersion:"apps/v1", ResourceVersion:"1022", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-vd4r9
I1203 13:09:15.999387   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575378553-9428", Name:"nginx-f87d999f7", UID:"f62d135c-cf85-4a8c-86cf-fdebdd3a3e00", APIVersion:"apps/v1", ResourceVersion:"1022", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-4vq6w
I1203 13:09:16.000041   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575378553-9428", Name:"nginx-f87d999f7", UID:"f62d135c-cf85-4a8c-86cf-fdebdd3a3e00", APIVersion:"apps/v1", ResourceVersion:"1022", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-lhzv6
E1203 13:09:16.049340   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
generic-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
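A sketch of the replacement workflow the deprecation notice describes, using the nginx deployment from this test (the filename is hypothetical):

  kubectl apply -f nginx-deployment.yaml          # filename hypothetical
  kubectl get deployment nginx -o yaml            # served at the preferred version
  kubectl get deployment.v1.apps nginx -o yaml    # request an explicit group/version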
generic-resources.sh:274: Successful get deployment nginx {{ .apiVersion }}: apps/v1
Successful
... skipping 38 lines ...
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}
has:extensions/v1beta1
deployment.apps "nginx" deleted
generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E1203 13:09:16.735532   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
E1203 13:09:16.828150   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E1203 13:09:16.941876   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
E1203 13:09:17.050667   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
Successful
message:pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I1203 13:09:17.441190   54248 namespace_controller.go:185] Namespace has been deleted non-native-resources
pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
Successful
message:pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E1203 13:09:17.736453   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1203 13:09:17.829867   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "busybox0" force deleted
pod "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
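A sketch of the force deletion that produced the warning above, over the directory used throughout this block:

  kubectl delete --recursive -f hack/testdata/recursive/pod --grace-period=0 --force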
generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
E1203 13:09:17.942720   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1203 13:09:18.051948   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/busybox0 created
I1203 13:09:18.104346   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1575378553-9428", Name:"busybox0", UID:"c36e2f4c-0b62-42a0-9027-9a88f0db0071", APIVersion:"v1", ResourceVersion:"1053", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-p55cc
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1203 13:09:18.109648   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1575378553-9428", Name:"busybox1", UID:"a987439d-515c-4c57-b473-b4012dbd8704", APIVersion:"v1", ResourceVersion:"1055", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-s5zt5
generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
E1203 13:09:18.737872   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
Successful
message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
horizontalpodautoscaler.autoscaling/busybox1 autoscaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
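The two HPA assertions above (min 1, max 2, target 80%) are consistent with a recursive autoscale over the rc directory; a sketch (80 is also kubectl's default target when --cpu-percent is omitted):

  kubectl autoscale --recursive -f hack/testdata/recursive/rc --min=1 --max=2 --cpu-percent=80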
E1203 13:09:18.831167   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
horizontalpodautoscaler.autoscaling "busybox0" deleted
horizontalpodautoscaler.autoscaling "busybox1" deleted
E1203 13:09:18.943970   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E1203 13:09:19.053338   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
Successful
message:service/busybox0 exposed
service/busybox1 exposed
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
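The service assertions above check the first port of each exposed service; <no value> means the port is unnamed. A sketch of the expose call shape consistent with the "exposed" messages:

  kubectl expose --recursive -f hack/testdata/recursive/rc --port=80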
generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
E1203 13:09:19.739239   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
E1203 13:09:19.832428   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1203 13:09:19.861479   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1575378553-9428", Name:"busybox0", UID:"c36e2f4c-0b62-42a0-9027-9a88f0db0071", APIVersion:"v1", ResourceVersion:"1073", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-gjp6d
I1203 13:09:19.870348   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1575378553-9428", Name:"busybox1", UID:"a987439d-515c-4c57-b473-b4012dbd8704", APIVersion:"v1", ResourceVersion:"1077", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-gl6bh
E1203 13:09:19.944843   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
E1203 13:09:20.055777   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
Successful
message:replicationcontroller/busybox0 scaled
replicationcontroller/busybox1 scaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
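A sketch of the recursive scale consistent with the "scaled" messages and the replicas=2 assertions above:

  kubectl scale --recursive -f hack/testdata/recursive/rc --replicas=2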
generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx1-deployment created
deployment.apps/nginx0-deployment created
error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1203 13:09:20.596416   54248 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1575378553-9428", Name:"nginx1-deployment", UID:"c7f12bf8-3da1-4b30-9338-e3e494a7d15e", APIVersion:"apps/v1", ResourceVersion:"1095", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-7bdbbfb5cf to 2
I1203 13:09:20.599399   54248 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1575378553-9428", Name:"nginx0-deployment", UID:"cbdc8250-76a0-40f4-aaf4-d7207f9fe4ba", APIVersion:"apps/v1", ResourceVersion:"1096", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57c6bff7f6 to 2
I1203 13:09:20.600871   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575378553-9428", Name:"nginx1-deployment-7bdbbfb5cf", UID:"bb73d75b-7769-4b86-903a-1f060021936f", APIVersion:"apps/v1", ResourceVersion:"1097", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-tmtxw
I1203 13:09:20.605415   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575378553-9428", Name:"nginx0-deployment-57c6bff7f6", UID:"5d26ac40-edcf-49cd-8acc-cd8813dfc774", APIVersion:"apps/v1", ResourceVersion:"1098", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-svz92
I1203 13:09:20.605906   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575378553-9428", Name:"nginx1-deployment-7bdbbfb5cf", UID:"bb73d75b-7769-4b86-903a-1f060021936f", APIVersion:"apps/v1", ResourceVersion:"1097", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-fbzvn
I1203 13:09:20.609213   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575378553-9428", Name:"nginx0-deployment-57c6bff7f6", UID:"5d26ac40-edcf-49cd-8acc-cd8813dfc774", APIVersion:"apps/v1", ResourceVersion:"1098", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-d2jjx
generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
E1203 13:09:20.740572   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
E1203 13:09:20.833891   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1203 13:09:20.946162   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
Successful
message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
E1203 13:09:21.057006   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx1-deployment paused
deployment.apps/nginx0-deployment paused
generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
Successful
message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
... skipping 9 lines ...
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx0-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx1-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
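The REVISION/CHANGE-CAUSE tables above are rollout-history output; a sketch of the call shape over the deployment directory used here:

  kubectl rollout history --recursive -f hack/testdata/recursive/deployment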
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
deployment.apps "nginx1-deployment" force deleted
deployment.apps "nginx0-deployment" force deleted
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
E1203 13:09:21.741913   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1203 13:09:21.835307   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1203 13:09:21.947226   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1203 13:09:22.058521   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
E1203 13:09:22.743384   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1203 13:09:22.836565   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/busybox0 created
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1203 13:09:22.880037   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1575378553-9428", Name:"busybox0", UID:"1d08decb-453e-4ce6-bdff-10e735c358e3", APIVersion:"v1", ResourceVersion:"1145", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-rlnps
I1203 13:09:22.885892   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1575378553-9428", Name:"busybox1", UID:"34a38816-860e-4ade-8b54-ff8a8a55d13c", APIVersion:"v1", ResourceVersion:"1147", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-rz9fw
E1203 13:09:22.948537   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E1203 13:09:23.059831   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:no rollbacker has been implemented for "ReplicationController"
Successful
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox0" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox1" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox0" resuming is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox1" resuming is not supported
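The pattern above: rollout undo/pause/resume are implemented per kind, so Deployments accept them while ReplicationControllers return "not supported" (and have no rollbacker). A sketch of both outcomes:

  kubectl rollout pause deployment/nginx1-deployment   # supported for Deployments
  kubectl rollout pause rc/busybox0                    # error: pausing is not supported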
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
E1203 13:09:23.744795   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1203 13:09:23.838067   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1203 13:09:23.949791   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1203 13:09:24.061244   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Recording: run_namespace_tests
Running command: run_namespace_tests

+++ Running case: test-cmd.run_namespace_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_namespace_tests
+++ [1203 13:09:24] Testing kubectl(v1:namespaces)
namespace/my-namespace created
core.sh:1314: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
(Bnamespace "my-namespace" deleted
E1203 13:09:24.746216   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1203 13:09:24.839395   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1203 13:09:24.951026   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 18 lines ...
namespace/my-namespace condition met
E1203 13:09:29.846295   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created
E1203 13:09:29.956772   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1323: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
E1203 13:09:30.070596   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
namespace "my-namespace" deleted
namespace "namespace-1575378418-29401" deleted
namespace "namespace-1575378421-504" deleted
... skipping 26 lines ...
namespace "namespace-1575378515-4644" deleted
namespace "namespace-1575378516-14163" deleted
namespace "namespace-1575378518-22532" deleted
namespace "namespace-1575378520-5666" deleted
namespace "namespace-1575378553-15600" deleted
namespace "namespace-1575378553-9428" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:warning: deleting cluster-scoped resources
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
namespace "my-namespace" deleted
namespace "namespace-1575378418-29401" deleted
... skipping 27 lines ...
namespace "namespace-1575378515-4644" deleted
namespace "namespace-1575378516-14163" deleted
namespace "namespace-1575378518-22532" deleted
namespace "namespace-1575378520-5666" deleted
namespace "namespace-1575378553-15600" deleted
namespace "namespace-1575378553-9428" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:namespace "my-namespace" deleted
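The two blocks above are consistent with a blanket namespace deletion: namespaces are cluster-scoped (hence the warning), and default, kube-public, and kube-system are protected by the API server and refuse deletion. A sketch of such a call:

  kubectl delete namespaces --all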
core.sh:1335: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
namespace/other created
core.sh:1339: Successful get namespaces/other {{.metadata.name}}: other
I1203 13:09:30.552307   54248 shared_informer.go:197] Waiting for caches to sync for resource quota
I1203 13:09:30.552358   54248 shared_informer.go:204] Caches are synced for resource quota 
core.sh:1343: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
E1203 13:09:30.754319   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
pod/valid-pod created
E1203 13:09:30.847465   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1347: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
E1203 13:09:30.957772   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1349: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I1203 13:09:31.057943   54248 shared_informer.go:197] Waiting for caches to sync for garbage collector
I1203 13:09:31.058013   54248 shared_informer.go:204] Caches are synced for garbage collector 
E1203 13:09:31.071918   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:error: a resource cannot be retrieved by name across all namespaces
has:a resource cannot be retrieved by name across all namespaces
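kubectl rejects fetching a resource by name together with --all-namespaces, since a name is only unique within one namespace. A sketch of the failing and working forms, using the pod from this test:

  kubectl get pods valid-pod --all-namespaces   # the error above
  kubectl get pods --all-namespaces             # listing across namespaces is fine
  kubectl get pods valid-pod -n other           # by name within a single namespace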
core.sh:1356: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:1360: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
(Bnamespace "other" deleted
E1203 13:09:31.755702   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1203 13:09:31.848953   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1203 13:09:31.959133   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1203 13:09:32.073280   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1203 13:09:32.757087   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1203 13:09:32.850741   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1203 13:09:32.962045   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1203 13:09:33.075227   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1203 13:09:33.559829   54248 horizontal.go:341] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1575378553-9428
I1203 13:09:33.563810   54248 horizontal.go:341] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1575378553-9428
... skipping 12 lines ...
+++ exit code: 0
Recording: run_secrets_test
Running command: run_secrets_test

+++ Running case: test-cmd.run_secrets_test 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_secrets_test
+++ [1203 13:09:36] Creating namespace namespace-1575378576-18698
E1203 13:09:36.765553   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1575378576-18698 created
E1203 13:09:36.856809   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Context "test" modified.
+++ [1203 13:09:36] Testing secrets
E1203 13:09:36.970859   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1203 13:09:36.997893   71137 loader.go:375] Config loaded from file:  /tmp/tmp.IOosRKPwtE/.kube/config
Successful
message:apiVersion: v1
data:
  key1: dmFsdWUx
kind: Secret
... skipping 25 lines ...
  key1: dmFsdWUx
kind: Secret
metadata:
  creationTimestamp: null
  name: test
has not:example.com
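The Secret manifests above are dry-run output, never persisted to the server; dmFsdWUx is base64 for "value1". A sketch of the command shape (this era's kubectl used a boolean --dry-run; newer releases spell it --dry-run=client):

  kubectl create secret generic test --from-literal=key1=value1 --dry-run -o yaml
  echo dmFsdWUx | base64 --decode   # -> value1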
E1203 13:09:37.091170   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:725: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-secrets\" }}found{{end}}{{end}}:: :
namespace/test-secrets created
core.sh:729: Successful get namespaces/test-secrets {{.metadata.name}}: test-secrets
core.sh:733: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
(Bsecret/test-secret created
core.sh:737: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:738: Successful get secret/test-secret --namespace=test-secrets {{.type}}: test-type
E1203 13:09:37.766676   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
secret "test-secret" deleted
E1203 13:09:37.858019   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:748: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
E1203 13:09:37.972281   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
secret/test-secret created
E1203 13:09:38.092025   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:752: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:753: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/dockerconfigjson
secret "test-secret" deleted
core.sh:763: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
secret/test-secret created
core.sh:766: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:767: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
E1203 13:09:38.768199   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
secret "test-secret" deleted
E1203 13:09:38.859269   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
secret/test-secret created
E1203 13:09:38.973520   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:773: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:774: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
E1203 13:09:39.093328   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
secret "test-secret" deleted
secret/secret-string-data created
core.sh:796: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
core.sh:797: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
core.sh:798: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
secret "secret-string-data" deleted
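The stringData assertions above reflect that stringData is write-only convenience input: the API server base64-encodes it into .data (djE= and djI= decode to v1 and v2) and does not store .stringData back, hence <no value>. A sketch of an object that produces exactly that:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Secret
  metadata:
    name: secret-string-data
  stringData:
    k1: v1
    k2: v2
  EOF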
E1203 13:09:39.769432   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:807: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
E1203 13:09:39.860491   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1203 13:09:39.928516   54248 namespace_controller.go:185] Namespace has been deleted my-namespace
secret "test-secret" deleted
E1203 13:09:39.974754   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace "test-secrets" deleted
E1203 13:09:40.094214   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1203 13:09:40.460420   54248 namespace_controller.go:185] Namespace has been deleted kube-node-lease
I1203 13:09:40.460467   54248 namespace_controller.go:185] Namespace has been deleted namespace-1575378436-12067
I1203 13:09:40.467123   54248 namespace_controller.go:185] Namespace has been deleted namespace-1575378418-29401
I1203 13:09:40.482526   54248 namespace_controller.go:185] Namespace has been deleted namespace-1575378436-28619
I1203 13:09:40.487790   54248 namespace_controller.go:185] Namespace has been deleted namespace-1575378441-991
I1203 13:09:40.492204   54248 namespace_controller.go:185] Namespace has been deleted namespace-1575378440-19271
I1203 13:09:40.497256   54248 namespace_controller.go:185] Namespace has been deleted namespace-1575378421-504
I1203 13:09:40.536252   54248 namespace_controller.go:185] Namespace has been deleted namespace-1575378426-29079
I1203 13:09:40.537178   54248 namespace_controller.go:185] Namespace has been deleted namespace-1575378441-11930
I1203 13:09:40.601704   54248 namespace_controller.go:185] Namespace has been deleted namespace-1575378433-27734
E1203 13:09:40.770675   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1203 13:09:40.849995   54248 namespace_controller.go:185] Namespace has been deleted namespace-1575378451-14985
I1203 13:09:40.858328   54248 namespace_controller.go:185] Namespace has been deleted namespace-1575378466-20883
I1203 13:09:40.858328   54248 namespace_controller.go:185] Namespace has been deleted namespace-1575378452-1734
E1203 13:09:40.861839   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1203 13:09:40.864960   54248 namespace_controller.go:185] Namespace has been deleted namespace-1575378463-19638
I1203 13:09:40.883074   54248 namespace_controller.go:185] Namespace has been deleted namespace-1575378467-16000
I1203 13:09:40.888176   54248 namespace_controller.go:185] Namespace has been deleted namespace-1575378470-965
I1203 13:09:40.888228   54248 namespace_controller.go:185] Namespace has been deleted namespace-1575378464-25165
I1203 13:09:40.907458   54248 namespace_controller.go:185] Namespace has been deleted namespace-1575378471-20704
I1203 13:09:40.930404   54248 namespace_controller.go:185] Namespace has been deleted namespace-1575378468-23790
E1203 13:09:40.975983   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1203 13:09:41.008131   54248 namespace_controller.go:185] Namespace has been deleted namespace-1575378473-29199
E1203 13:09:41.095685   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1203 13:09:41.235425   54248 namespace_controller.go:185] Namespace has been deleted namespace-1575378476-26988
I1203 13:09:41.255431   54248 namespace_controller.go:185] Namespace has been deleted namespace-1575378496-3442
I1203 13:09:41.255530   54248 namespace_controller.go:185] Namespace has been deleted namespace-1575378495-6502
I1203 13:09:41.262135   54248 namespace_controller.go:185] Namespace has been deleted namespace-1575378511-27421
I1203 13:09:41.265124   54248 namespace_controller.go:185] Namespace has been deleted namespace-1575378515-15051
I1203 13:09:41.289240   54248 namespace_controller.go:185] Namespace has been deleted namespace-1575378494-27190
... skipping 4 lines ...
I1203 13:09:41.550136   54248 namespace_controller.go:185] Namespace has been deleted namespace-1575378516-14163
I1203 13:09:41.567665   54248 namespace_controller.go:185] Namespace has been deleted namespace-1575378518-22532
I1203 13:09:41.569534   54248 namespace_controller.go:185] Namespace has been deleted namespace-1575378520-5666
I1203 13:09:41.590918   54248 namespace_controller.go:185] Namespace has been deleted namespace-1575378553-15600
I1203 13:09:41.634184   54248 namespace_controller.go:185] Namespace has been deleted other
I1203 13:09:41.655779   54248 namespace_controller.go:185] Namespace has been deleted namespace-1575378553-9428
E1203 13:09:41.772156   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 15 lines ...
+++ exit code: 0
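
The reflector error repeated throughout this run is emitted by a metadata informer (k8s.io/client-go/metadata/metadatainformer, informer.go:89) whose list call keeps getting a 404 for its resource. A minimal sketch of how such an informer is typically wired up with client-go, assuming a hypothetical CRD-backed resource (example.com/v1 widgets) and a kubeconfig path, neither of which appears in this run:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/metadata"
	"k8s.io/client-go/metadata/metadatainformer"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	client, err := metadata.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	factory := metadatainformer.NewSharedInformerFactory(client, 10*time.Minute)
	// The informer lists and watches only object metadata
	// (*v1.PartialObjectMetadata), the type named in the errors above.
	gvr := schema.GroupVersionResource{Group: "example.com", Version: "v1", Resource: "widgets"} // hypothetical GVR
	informer := factory.ForResource(gvr).Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	// If the resource's endpoints disappear (e.g. its CRD is deleted), the
	// reflector retries forever and logs "Failed to list
	// *v1.PartialObjectMetadata: the server could not find the requested
	// resource", as seen throughout this run.
	fmt.Println("informer started; synced:", informer.HasSynced())
}
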
Recording: run_configmap_tests
Running command: run_configmap_tests

+++ Running case: test-cmd.run_configmap_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_configmap_tests
+++ [1203 13:09:45] Creating namespace namespace-1575378585-25438
namespace/namespace-1575378585-25438 created
Context "test" modified.
+++ [1203 13:09:45] Testing configmaps
configmap/test-configmap created
core.sh:28: Successful get configmap/test-configmap {{.metadata.name}}: test-configmap
E1203 13:09:45.777771   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
configmap "test-configmap" deleted
E1203 13:09:45.868872   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:33: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-configmaps\" }}found{{end}}{{end}}:: :
namespace/test-configmaps created
E1203 13:09:45.983500   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:37: Successful get namespaces/test-configmaps {{.metadata.name}}: test-configmaps
E1203 13:09:46.102074   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:41: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-configmap\" }}found{{end}}{{end}}:: :
core.sh:42: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-binary-configmap\" }}found{{end}}{{end}}:: :
configmap/test-configmap created
configmap/test-binary-configmap created
core.sh:48: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
core.sh:49: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
E1203 13:09:46.779288   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1203 13:09:46.870261   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
configmap "test-configmap" deleted
E1203 13:09:46.984820   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
configmap "test-binary-configmap" deleted
namespace "test-configmaps" deleted
E1203 13:09:47.103296   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 12 lines ...
I1203 13:09:50.122034   54248 namespace_controller.go:185] Namespace has been deleted test-secrets
E1203 13:09:50.785349   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 7 lines ...
+++ exit code: 0
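
The core.sh assertions above pipe kubectl output through Go text/template expressions such as {{range.items}}{{ if eq .metadata.name \"test-configmaps\" }}found{{end}}{{end}}. A self-contained sketch of how that kind of template evaluates against a list-shaped document; the input data here is made up for illustration:

package main

import (
	"os"
	"text/template"
)

func main() {
	// Same shape of expression as the core.sh checks above: scan .items and
	// print "found" when an item has the expected .metadata.name.
	tmpl := template.Must(template.New("check").Parse(
		`{{range .items}}{{if eq .metadata.name "test-configmaps"}}found{{end}}{{end}}`))
	doc := map[string]interface{}{
		"items": []interface{}{
			map[string]interface{}{
				"metadata": map[string]interface{}{"name": "test-configmaps"},
			},
		},
	}
	// Prints "found"; against an empty list it prints nothing, which the
	// harness renders as the bare ":" seen in the assertions above.
	if err := tmpl.Execute(os.Stdout, doc); err != nil {
		panic(err)
	}
}
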
Recording: run_client_config_tests
Running command: run_client_config_tests

+++ Running case: test-cmd.run_client_config_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_client_config_tests
+++ [1203 13:09:52] Creating namespace namespace-1575378592-14657
namespace/namespace-1575378592-14657 created
Context "test" modified.
+++ [1203 13:09:52] Testing client config
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:Error in configuration: context was not found for specified context: missing-context
has:context was not found for specified context: missing-context
Successful
message:error: no server found for cluster "missing-cluster"
has:no server found for cluster "missing-cluster"
E1203 13:09:52.788316   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:error: auth info "missing-user" does not exist
has:auth info "missing-user" does not exist
E1203 13:09:52.878514   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1203 13:09:52.993301   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
has:error loading config file
Successful
message:error: stat missing-config: no such file or directory
has:no such file or directory
+++ exit code: 0
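
The client-config checks above exercise kubeconfig loading via k8s.io/client-go/tools/clientcmd. A rough sketch of how those error paths surface in code; the file path and context name are taken from the checks above, and the handling is illustrative:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// A missing or malformed file fails here; an unregistered version inside
	// the file yields the `no kind "Config" is registered for version "v-1"`
	// error matched above.
	config, err := clientcmd.LoadFromFile("/tmp/newconfig.yaml")
	if err != nil {
		fmt.Println("load error:", err)
		return
	}
	// Resolving a context that does not exist produces the
	// "context was not found for specified context" error matched above.
	if _, ok := config.Contexts["missing-context"]; !ok {
		fmt.Println("context was not found for specified context: missing-context")
	}
}
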
E1203 13:09:53.111153   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Recording: run_service_accounts_tests
Running command: run_service_accounts_tests

+++ Running case: test-cmd.run_service_accounts_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_service_accounts_tests
... skipping 3 lines ...
+++ [1203 13:09:53] Testing service accounts
core.sh:828: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-service-accounts\" }}found{{end}}{{end}}:: :
namespace/test-service-accounts created
core.sh:832: Successful get namespaces/test-service-accounts {{.metadata.name}}: test-service-accounts
serviceaccount/test-service-account created
core.sh:838: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
E1203 13:09:53.789536   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
serviceaccount "test-service-account" deleted
E1203 13:09:53.879790   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace "test-service-accounts" deleted
E1203 13:09:53.994834   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 13 lines ...
I1203 13:09:57.173773   54248 namespace_controller.go:185] Namespace has been deleted test-configmaps
E1203 13:09:57.795412   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 6 lines ...
+++ exit code: 0
Recording: run_job_tests
Running command: run_job_tests

+++ Running case: test-cmd.run_job_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_job_tests
+++ [1203 13:09:59] Creating namespace namespace-1575378599-6738
E1203 13:09:59.119287   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1575378599-6738 created
Context "test" modified.
+++ [1203 13:09:59] Testing job
batch.sh:30: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-jobs\" }}found{{end}}{{end}}:: :
namespace/test-jobs created
batch.sh:34: Successful get namespaces/test-jobs {{.metadata.name}}: test-jobs
kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
cronjob.batch/pi created
batch.sh:39: Successful get cronjob/pi --namespace=test-jobs {{.metadata.name}}: pi
NAME   SCHEDULE       SUSPEND   ACTIVE   LAST SCHEDULE   AGE
pi     59 23 31 2 *   False     0        <none>          0s
E1203 13:09:59.798677   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Name:                          pi
Namespace:                     test-jobs
Labels:                        run=pi
Annotations:                   <none>
Schedule:                      59 23 31 2 *
Concurrency Policy:            Allow
Suspend:                       False
Successful Job History Limit:  3
Failed Job History Limit:      1
Starting Deadline Seconds:     <unset>
Selector:                      <unset>
Parallelism:                   <unset>
Completions:                   <unset>
Pod Template:
  Labels:  run=pi
... skipping 13 lines ...
    Environment:     <none>
    Mounts:          <none>
  Volumes:           <none>
Last Schedule Time:  <unset>
Active Jobs:         <none>
Events:              <none>
E1203 13:09:59.888320   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:job.batch/test-job
has:job.batch/test-job
E1203 13:10:00.002500   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
batch.sh:48: Successful get jobs {{range.items}}{{.metadata.name}}{{end}}: 
E1203 13:10:00.120470   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1203 13:10:00.145279   54248 event.go:281] Event(v1.ObjectReference{Kind:"Job", Namespace:"test-jobs", Name:"test-job", UID:"f043575b-f79e-4e64-aed1-af74effa898a", APIVersion:"batch/v1", ResourceVersion:"1485", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-c9jlq
job.batch/test-job created
batch.sh:53: Successful get job/test-job --namespace=test-jobs {{.metadata.name}}: test-job
NAME       COMPLETIONS   DURATION   AGE
test-job   0/1           0s         0s
Name:           test-job
... skipping 4 lines ...
                run=pi
Annotations:    cronjob.kubernetes.io/instantiate: manual
Controlled By:  CronJob/pi
Parallelism:    1
Completions:    1
Start Time:     Tue, 03 Dec 2019 13:10:00 +0000
Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=f043575b-f79e-4e64-aed1-af74effa898a
           job-name=test-job
           run=pi
  Containers:
   pi:
... skipping 15 lines ...
  Type    Reason            Age   From            Message
  ----    ------            ----  ----            -------
  Normal  SuccessfulCreate  0s    job-controller  Created pod: test-job-c9jlq
job.batch "test-job" deleted
cronjob.batch "pi" deleted
namespace "test-jobs" deleted
E1203 13:10:00.800083   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 13 lines ...
I1203 13:10:03.992482   54248 namespace_controller.go:185] Namespace has been deleted test-service-accounts
E1203 13:10:04.007503   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
... skipping 5 lines ...
+++ exit code: 0
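
The test-job above carries the cronjob.kubernetes.io/instantiate: manual annotation and is controlled by CronJob/pi, i.e. it was stamped out from the CronJob's job template. A rough sketch of creating such a Job with a current client-go (the image is illustrative, the context parameters postdate this log's client-go vintage, and a real instantiation would also set an ownerReference to the CronJob):

package sketch

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func createJobFromCronJobTemplate(cs kubernetes.Interface) error {
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "test-job",
			Namespace: "test-jobs",
			// The manual-instantiation marker shown in the describe output above.
			Annotations: map[string]string{"cronjob.kubernetes.io/instantiate": "manual"},
		},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{{
						Name:  "pi",
						Image: "k8s.gcr.io/perl", // illustrative image
					}},
				},
			},
		},
	}
	_, err := cs.BatchV1().Jobs("test-jobs").Create(context.TODO(), job, metav1.CreateOptions{})
	return err
}
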
E1203 13:10:05.806934   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Recording: run_create_job_tests
Running command: run_create_job_tests

+++ Running case: test-cmd.run_create_job_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_create_job_tests
+++ [1203 13:10:05] Creating namespace namespace-1575378605-26803
E1203 13:10:05.896306   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1575378605-26803 created
E1203 13:10:06.010244   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Context "test" modified.
E1203 13:10:06.128761   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1203 13:10:06.144842   54248 event.go:281] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1575378605-26803", Name:"test-job", UID:"0085435d-4be7-479a-aa77-9e587638e7b7", APIVersion:"batch/v1", ResourceVersion:"1507", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-xwk4m
job.batch/test-job created
create.sh:86: Successful get job test-job {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/nginx:test-cmd
job.batch "test-job" deleted
I1203 13:10:06.432925   54248 event.go:281] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1575378605-26803", Name:"test-job-pi", UID:"3c3fd5f4-3a75-4537-a98b-220517e1c7bd", APIVersion:"batch/v1", ResourceVersion:"1516", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-job-pi-mvd8j
job.batch/test-job-pi created
create.sh:92: Successful get job test-job-pi {{(index .spec.template.spec.containers 0).image}}: k8s.gcr.io/perl
job.batch "test-job-pi" deleted
kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
cronjob.batch/test-pi created
E1203 13:10:06.808383   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1203 13:10:06.831150   54248 event.go:281] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1575378605-26803", Name:"my-pi", UID:"dd38c321-2aab-4d0f-995f-537392043142", APIVersion:"batch/v1", ResourceVersion:"1524", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-pi-qfm5g
job.batch/my-pi created
E1203 13:10:06.897734   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:[perl -Mbignum=bpi -wle print bpi(10)]
has:perl -Mbignum=bpi -wle print bpi(10)
job.batch "my-pi" deleted
E1203 13:10:07.011479   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
cronjob.batch "test-pi" deleted
+++ exit code: 0
E1203 13:10:07.130037   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Recording: run_pod_templates_tests
Running command: run_pod_templates_tests

+++ Running case: test-cmd.run_pod_templates_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_pod_templates_tests
... skipping 4 lines ...
core.sh:1421: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: 
I1203 13:10:07.610395   50788 controller.go:606] quota admission added evaluator for: podtemplates
podtemplate/nginx created
core.sh:1425: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
NAME    CONTAINERS   IMAGES   POD LABELS
nginx   nginx        nginx    name=nginx
E1203 13:10:07.809709   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1203 13:10:07.899222   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1433: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
E1203 13:10:08.012882   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
podtemplate "nginx" deleted
E1203 13:10:08.131348   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1437: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}: 
+++ exit code: 0
Recording: run_service_tests
Running command: run_service_tests

+++ Running case: test-cmd.run_service_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_service_tests
Context "test" modified.
+++ [1203 13:10:08] Testing kubectl(v1:services)
core.sh:858: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
service/redis-master created
core.sh:862: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
E1203 13:10:08.810763   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
matched Name:
matched Labels:
matched Selector:
matched IP:
matched Port:
matched Endpoints:
... skipping 10 lines ...
IP:                10.0.0.31
Port:              <unset>  6379/TCP
TargetPort:        6379/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
E1203 13:10:08.900354   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:866: Successful describe
Name:              redis-master
Namespace:         default
Labels:            app=redis
                   role=master
                   tier=backend
... skipping 4 lines ...
Port:              <unset>  6379/TCP
TargetPort:        6379/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
E1203 13:10:09.018988   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:868: Successful describe
Name:              redis-master
Namespace:         default
Labels:            app=redis
                   role=master
                   tier=backend
... skipping 3 lines ...
IP:                10.0.0.31
Port:              <unset>  6379/TCP
TargetPort:        6379/TCP
Endpoints:         <none>
Session Affinity:  None
E1203 13:10:09.132363   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:870: Successful describe
Name:              redis-master
Namespace:         default
Labels:            app=redis
                   role=master
                   tier=backend
... skipping 147 lines ...
  - port: 6379
    targetPort: 6379
  selector:
    role: padawan
status:
  loadBalancer: {}
E1203 13:10:09.812092   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-12-03T13:10:08Z"
  labels:
    app: redis
... skipping 13 lines ...
  selector:
    role: padawan
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
E1203 13:10:09.901601   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/redis-master selector updated
core.sh:890: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: padawan:
E1203 13:10:10.020245   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/redis-master selector updated
E1203 13:10:10.133587   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:894: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-12-03T13:10:08Z"
  labels:
... skipping 14 lines ...
  selector:
    role: padawan
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
core.sh:898: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
service/redis-master selector updated
Successful
message:Error from server (Conflict): Operation cannot be fulfilled on services "redis-master": the object has been modified; please apply your changes to the latest version and try again
has:Conflict
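
The Conflict above is optimistic concurrency at work: the second selector update carried a stale resourceVersion. The standard client-go answer is retry.RetryOnConflict; a minimal sketch against the service under test (the selector value is illustrative, and the context parameters in these signatures postdate this log's client-go vintage):

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

func updateSelector(cs kubernetes.Interface) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-read the object on every attempt so the update carries a current
		// resourceVersion; a stale one yields exactly the Conflict shown above.
		svc, err := cs.CoreV1().Services("default").Get(context.TODO(), "redis-master", metav1.GetOptions{})
		if err != nil {
			return err
		}
		svc.Spec.Selector = map[string]string{"role": "padawan"} // illustrative selector
		_, err = cs.CoreV1().Services("default").Update(context.TODO(), svc, metav1.UpdateOptions{})
		return err
	})
}
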
I1203 13:10:10.771790   54248 namespace_controller.go:185] Namespace has been deleted test-jobs
E1203 13:10:10.813352   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:911: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
E1203 13:10:10.902834   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service "redis-master" deleted
E1203 13:10:11.021526   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:918: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
E1203 13:10:11.134834   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:922: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
service/redis-master created
core.sh:926: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
core.sh:930: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
service/service-v1-test created
core.sh:951: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
E1203 13:10:11.814811   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1203 13:10:11.904208   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/service-v1-test replaced
E1203 13:10:12.022923   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:958: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
E1203 13:10:12.136133   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service "redis-master" deleted
service "service-v1-test" deleted
core.sh:966: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
core.sh:970: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
service/redis-master created
service/redis-slave created
E1203 13:10:12.816063   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1203 13:10:12.905544   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:975: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
Successful
message:NAME           RSRC
kubernetes     144
redis-master   1562
redis-slave    1565
has:redis-master
E1203 13:10:13.024282   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:985: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
E1203 13:10:13.137499   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service "redis-master" deleted
service "redis-slave" deleted
core.sh:992: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
core.sh:996: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
service/beep-boop created
core.sh:1000: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: beep-boop:kubernetes:
core.sh:1004: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: beep-boop:kubernetes:
service "beep-boop" deleted
E1203 13:10:13.817597   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1011: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
E1203 13:10:13.906757   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1015: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
E1203 13:10:14.025751   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
I1203 13:10:14.042203   54248 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"testmetadata", UID:"7a072613-2912-40e8-b428-e3f708541410", APIVersion:"apps/v1", ResourceVersion:"1578", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set testmetadata-bd968f46 to 2
I1203 13:10:14.049331   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"2cee6a01-8447-4cbb-90ae-d969d050d059", APIVersion:"apps/v1", ResourceVersion:"1579", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-fd4tt
I1203 13:10:14.053406   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-bd968f46", UID:"2cee6a01-8447-4cbb-90ae-d969d050d059", APIVersion:"apps/v1", ResourceVersion:"1579", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-bd968f46-fg2br
service/testmetadata created
deployment.apps/testmetadata created
E1203 13:10:14.138971   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1019: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: testmetadata:
core.sh:1020: Successful get service testmetadata {{.metadata.annotations}}: map[zone-context:home]
service/exposemetadata exposed
core.sh:1026: Successful get service exposemetadata {{.metadata.annotations}}: map[zone-context:work]
(Bservice "exposemetadata" deleted
service "testmetadata" deleted
... skipping 3 lines ...
Running command: run_daemonset_tests

+++ Running case: test-cmd.run_daemonset_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_daemonset_tests
+++ [1203 13:10:14] Creating namespace namespace-1575378614-18364
E1203 13:10:14.818865   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1575378614-18364 created
Context "test" modified.
+++ [1203 13:10:14] Testing kubectl(v1:daemonsets)
E1203 13:10:14.908191   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:30: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
E1203 13:10:15.027089   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1203 13:10:15.140424   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1203 13:10:15.167690   50788 controller.go:606] quota admission added evaluator for: daemonsets.apps
daemonset.apps/bind created
I1203 13:10:15.177256   50788 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
apps.sh:34: Successful get daemonsets bind {{.metadata.generation}}: 1
daemonset.apps/bind configured
apps.sh:37: Successful get daemonsets bind {{.metadata.generation}}: 1
daemonset.apps/bind image updated
apps.sh:40: Successful get daemonsets bind {{.metadata.generation}}: 2
E1203 13:10:15.820264   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
daemonset.apps/bind env updated
E1203 13:10:15.909345   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:42: Successful get daemonsets bind {{.metadata.generation}}: 3
E1203 13:10:16.028229   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
daemonset.apps/bind resource requirements updated
E1203 13:10:16.141841   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:44: Successful get daemonsets bind {{.metadata.generation}}: 4
daemonset.apps/bind restarted
apps.sh:48: Successful get daemonsets bind {{.metadata.generation}}: 5
(Bdaemonset.apps "bind" deleted
+++ exit code: 0
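
The apps.sh assertions above count .metadata.generation, which the API server bumps on every spec mutation (the configure, image, env, resources, and restart steps took it from 1 to 5). A small sketch of reading those counters, assuming a current client-go; the namespace is the one created above:

package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func printGenerations(cs kubernetes.Interface) error {
	ds, err := cs.AppsV1().DaemonSets("namespace-1575378614-18364").Get(
		context.TODO(), "bind", metav1.GetOptions{})
	if err != nil {
		return err
	}
	// Generation counts spec changes; Status.ObservedGeneration reports how
	// far the daemonset controller has caught up with them.
	fmt.Println(ds.Generation, ds.Status.ObservedGeneration)
	return nil
}
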
Recording: run_daemonset_history_tests
... skipping 3 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_daemonset_history_tests
+++ [1203 13:10:16] Creating namespace namespace-1575378616-22612
namespace/namespace-1575378616-22612 created
Context "test" modified.
+++ [1203 13:10:16] Testing kubectl(v1:daemonsets, v1:controllerrevisions)
E1203 13:10:16.821693   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:66: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
E1203 13:10:16.910666   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1203 13:10:17.029592   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
daemonset.apps/bind created
E1203 13:10:17.143369   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:70: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1575378616-22612"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
 kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
daemonset.apps/bind skipped rollback (current template already matches revision 1)
apps.sh:73: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:74: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
daemonset.apps/bind configured
apps.sh:77: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
E1203 13:10:17.823130   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:78: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
E1203 13:10:17.912151   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:79: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
E1203 13:10:18.030814   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:80: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1575378616-22612"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
 kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:map[deprecated.daemonset.template.generation:2 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1575378616-22612"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:latest","name":"kubernetes-pause"},{"image":"k8s.gcr.io/nginx:test-cmd","name":"app"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
 kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
daemonset.apps/bind will roll back to Pod Template:
  Labels:	service=bind
  Containers:
... skipping 2 lines ...
    Port:	<none>
    Host Port:	<none>
    Environment:	<none>
    Mounts:	<none>
  Volumes:	<none>
 (dry run)
E1203 13:10:18.144573   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:83: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
daemonset.apps/bind rolled back
E1203 13:10:18.555509   54248 daemon_controller.go:290] namespace-1575378616-22612/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1575378616-22612", SelfLink:"/apis/apps/v1/namespaces/namespace-1575378616-22612/daemonsets/bind", UID:"9f9d0d66-8637-47a3-b4db-06bf8f57f4af", ResourceVersion:"1647", Generation:3, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63710975417, loc:(*time.Location)(0x6b84320)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"3", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1575378616-22612\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0008f1980), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:2.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, 
EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002a805a8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00256a060), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc0008f19c0), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0008602d8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002a8060c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:2, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
apps.sh:88: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:89: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
E1203 13:10:18.824477   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
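This "unable to find specified revision" failure is the expected negative case for kubectl rollout undo when the requested revision is not in the daemonset's history; the "rolled back" line a few entries further down is the plain undo succeeding. The invocations sketched below are assumptions about what apps.sh runs at this point:

  kubectl rollout history daemonset/bind                     # list the revisions that actually exist
  kubectl rollout undo daemonset/bind --to-revision=1000000  # fails: revision not in history
  kubectl rollout undo daemonset/bind                        # succeeds: back to the previous revision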
E1203 13:10:18.913901   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
E1203 13:10:19.032069   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:94: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
E1203 13:10:19.145597   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
daemonset.apps/bind rolled back
apps.sh:97: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
apps.sh:98: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:99: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
daemonset.apps "bind" deleted
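Each "Successful get ..." line in this log is a go-template assertion: the harness renders the listed objects through the template and compares the result with the expected string after the final colon. The same check can be run by hand (assuming kubectl pointed at the test cluster, while the daemonset still exists):

  kubectl get daemonset -o go-template='{{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}'
  # after the rollback above this would print: k8s.gcr.io/pause:latest: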
+++ exit code: 0
... skipping 4 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_rc_tests
+++ [1203 13:10:19] Creating namespace namespace-1575378619-32691
namespace/namespace-1575378619-32691 created
Context "test" modified.
+++ [1203 13:10:19] Testing kubectl(v1:replicationcontrollers)
E1203 13:10:19.825769   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1052: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
E1203 13:10:19.915203   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1203 13:10:20.033307   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/frontend created
I1203 13:10:20.053532   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1575378619-32691", Name:"frontend", UID:"916ec1ae-485b-4816-988c-a8761c32edcb", APIVersion:"v1", ResourceVersion:"1658", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-qnxq2
I1203 13:10:20.056052   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1575378619-32691", Name:"frontend", UID:"916ec1ae-485b-4816-988c-a8761c32edcb", APIVersion:"v1", ResourceVersion:"1658", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-b7zr8
I1203 13:10:20.057368   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1575378619-32691", Name:"frontend", UID:"916ec1ae-485b-4816-988c-a8761c32edcb", APIVersion:"v1", ResourceVersion:"1658", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-9b9fs
replicationcontroller "frontend" deleted
E1203 13:10:20.148444   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1057: Successful get pods -l "name=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:1061: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/frontend created
I1203 13:10:20.499455   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1575378619-32691", Name:"frontend", UID:"e000263a-2087-42b7-95e8-760663039a48", APIVersion:"v1", ResourceVersion:"1676", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-wm5b4
I1203 13:10:20.502008   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1575378619-32691", Name:"frontend", UID:"e000263a-2087-42b7-95e8-760663039a48", APIVersion:"v1", ResourceVersion:"1676", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-4lphk
I1203 13:10:20.502395   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1575378619-32691", Name:"frontend", UID:"e000263a-2087-42b7-95e8-760663039a48", APIVersion:"v1", ResourceVersion:"1676", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-mkd9b
... skipping 11 lines ...
Namespace:    namespace-1575378619-32691
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 9 lines ...
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-wm5b4
  Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-4lphk
  Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-mkd9b
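The block above is ordinary kubectl describe output for a replication controller; the near-identical blocks that follow are the same describe exercised with different options by core.sh (note that one of them omits the Events section). A sketch, assuming the namespace from this log:

  kubectl describe rc frontend -n namespace-1575378619-32691
  kubectl describe rc frontend -n namespace-1575378619-32691 --show-events=false  # drops the Events section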
E1203 13:10:20.826924   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1069: Successful describe
Name:         frontend
Namespace:    namespace-1575378619-32691
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 10 lines ...
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-wm5b4
  Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-4lphk
  Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-mkd9b
E1203 13:10:20.916461   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1071: Successful describe
Name:         frontend
Namespace:    namespace-1575378619-32691
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 4 lines ...
      memory:  100Mi
    Environment:
      GET_HOSTS_FROM:  dns
    Mounts:            <none>
  Volumes:             <none>
E1203 13:10:21.034356   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1073: Successful describe
Name:         frontend
Namespace:    namespace-1575378619-32691
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 10 lines ...
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-wm5b4
  Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-4lphk
  Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-mkd9b
E1203 13:10:21.149526   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
matched Name:
matched Name:
matched Pod Template:
matched Labels:
matched Selector:
matched Replicas:
... skipping 5 lines ...
Namespace:    namespace-1575378619-32691
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1575378619-32691
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1575378619-32691
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
Namespace:    namespace-1575378619-32691
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 14 lines ...
  Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-mkd9b
core.sh:1085: Successful get rc frontend {{.spec.replicas}}: 3
replicationcontroller/frontend scaled
E1203 13:10:21.699123   54248 replica_set.go:199] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend  namespace-1575378619-32691 /api/v1/namespaces/namespace-1575378619-32691/replicationcontrollers/frontend e000263a-2087-42b7-95e8-760663039a48 1687 2 2019-12-03 13:10:20 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v4 [] []  [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002d84cd8 <nil> ClusterFirst map[]   <nil>  false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
I1203 13:10:21.706259   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1575378619-32691", Name:"frontend", UID:"e000263a-2087-42b7-95e8-760663039a48", APIVersion:"v1", ResourceVersion:"1687", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-wm5b4
core.sh:1089: Successful get rc frontend {{.spec.replicas}}: 2
E1203 13:10:21.828386   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1093: Successful get rc frontend {{.spec.replicas}}: 2
E1203 13:10:21.917741   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
error: Expected replicas to be 3, was 2
E1203 13:10:22.035619   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1097: Successful get rc frontend {{.spec.replicas}}: 2
E1203 13:10:22.150982   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1101: Successful get rc frontend {{.spec.replicas}}: 2
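The "error: Expected replicas to be 3, was 2" line above is kubectl scale's precondition check failing by design: with --current-replicas the resize is rejected unless the live replica count matches. A sketch of the two forms (the exact flags in core.sh are an assumption):

  kubectl scale rc frontend --replicas=2                       # unconditional resize
  kubectl scale rc frontend --current-replicas=3 --replicas=1  # rejected here: live count is 2, not 3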
replicationcontroller/frontend scaled
I1203 13:10:22.275557   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1575378619-32691", Name:"frontend", UID:"e000263a-2087-42b7-95e8-760663039a48", APIVersion:"v1", ResourceVersion:"1693", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-vfzwq
core.sh:1105: Successful get rc frontend {{.spec.replicas}}: 3
core.sh:1109: Successful get rc frontend {{.spec.replicas}}: 3
replicationcontroller/frontend scaled
E1203 13:10:22.552088   54248 replica_set.go:199] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend  namespace-1575378619-32691 /api/v1/namespaces/namespace-1575378619-32691/replicationcontrollers/frontend e000263a-2087-42b7-95e8-760663039a48 1700 4 2019-12-03 13:10:20 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v4 [] []  [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0029ef578 <nil> ClusterFirst map[]   <nil>  false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
I1203 13:10:22.558025   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1575378619-32691", Name:"frontend", UID:"e000263a-2087-42b7-95e8-760663039a48", APIVersion:"v1", ResourceVersion:"1700", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-vfzwq
core.sh:1113: Successful get rc frontend {{.spec.replicas}}: 2
(Breplicationcontroller "frontend" deleted
E1203 13:10:22.829687   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/redis-master created
I1203 13:10:22.909127   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1575378619-32691", Name:"redis-master", UID:"d816572b-debb-4a12-864c-51dc9617a27d", APIVersion:"v1", ResourceVersion:"1712", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-hppr8
E1203 13:10:22.918988   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1203 13:10:23.037011   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/redis-slave created
I1203 13:10:23.085766   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1575378619-32691", Name:"redis-slave", UID:"7856e46f-36eb-4610-902f-e0d65278c045", APIVersion:"v1", ResourceVersion:"1717", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-mfpfg
I1203 13:10:23.088364   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1575378619-32691", Name:"redis-slave", UID:"7856e46f-36eb-4610-902f-e0d65278c045", APIVersion:"v1", ResourceVersion:"1717", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-mctjg
E1203 13:10:23.152209   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/redis-master scaled
I1203 13:10:23.182986   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1575378619-32691", Name:"redis-master", UID:"d816572b-debb-4a12-864c-51dc9617a27d", APIVersion:"v1", ResourceVersion:"1724", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-7mksm
replicationcontroller/redis-slave scaled
I1203 13:10:23.185881   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1575378619-32691", Name:"redis-master", UID:"d816572b-debb-4a12-864c-51dc9617a27d", APIVersion:"v1", ResourceVersion:"1724", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-q7rrn
I1203 13:10:23.187604   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1575378619-32691", Name:"redis-slave", UID:"7856e46f-36eb-4610-902f-e0d65278c045", APIVersion:"v1", ResourceVersion:"1726", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-r866s
I1203 13:10:23.188184   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1575378619-32691", Name:"redis-master", UID:"d816572b-debb-4a12-864c-51dc9617a27d", APIVersion:"v1", ResourceVersion:"1724", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-fk6jw
... skipping 8 lines ...
I1203 13:10:23.649289   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575378619-32691", Name:"nginx-deployment-6986c7bc94", UID:"56fe4e54-a50e-4df7-8168-88ee370b9e77", APIVersion:"apps/v1", ResourceVersion:"1761", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-2pxc9
I1203 13:10:23.649480   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575378619-32691", Name:"nginx-deployment-6986c7bc94", UID:"56fe4e54-a50e-4df7-8168-88ee370b9e77", APIVersion:"apps/v1", ResourceVersion:"1761", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-g6djq
deployment.apps/nginx-deployment scaled
I1203 13:10:23.738426   54248 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1575378619-32691", Name:"nginx-deployment", UID:"968fe304-6068-4f8c-aa8a-fdfac02c136d", APIVersion:"apps/v1", ResourceVersion:"1774", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-6986c7bc94 to 1
I1203 13:10:23.744808   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575378619-32691", Name:"nginx-deployment-6986c7bc94", UID:"56fe4e54-a50e-4df7-8168-88ee370b9e77", APIVersion:"apps/v1", ResourceVersion:"1775", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6986c7bc94-2pxc9
I1203 13:10:23.745654   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575378619-32691", Name:"nginx-deployment-6986c7bc94", UID:"56fe4e54-a50e-4df7-8168-88ee370b9e77", APIVersion:"apps/v1", ResourceVersion:"1775", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6986c7bc94-g6djq
E1203 13:10:23.830986   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1133: Successful get deployment nginx-deployment {{.spec.replicas}}: 1
deployment.apps "nginx-deployment" deleted
E1203 13:10:23.919811   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:service/expose-test-deployment exposed
has:service/expose-test-deployment exposed
E1203 13:10:24.038241   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service "expose-test-deployment" deleted
E1203 13:10:24.153601   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
See 'kubectl expose -h' for help and examples
has:invalid deployment: no selectors
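This expose failure is another deliberate negative case: kubectl expose needs a selector, either introspected from the target object or supplied with --selector, and the test's deployment provides neither. The happy path exercised next, sketched:

  kubectl expose deployment nginx-deployment --port=80  # works: selector introspected from the deployment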
deployment.apps/nginx-deployment created
I1203 13:10:24.383354   54248 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1575378619-32691", Name:"nginx-deployment", UID:"b5a8a0a7-f20f-4749-9ca7-940adb05b520", APIVersion:"apps/v1", ResourceVersion:"1800", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
I1203 13:10:24.388009   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575378619-32691", Name:"nginx-deployment-6986c7bc94", UID:"575500a9-8e6b-4802-80ce-536e55e966a8", APIVersion:"apps/v1", ResourceVersion:"1801", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-gc5zz
I1203 13:10:24.391453   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575378619-32691", Name:"nginx-deployment-6986c7bc94", UID:"575500a9-8e6b-4802-80ce-536e55e966a8", APIVersion:"apps/v1", ResourceVersion:"1801", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-q577z
I1203 13:10:24.392878   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1575378619-32691", Name:"nginx-deployment-6986c7bc94", UID:"575500a9-8e6b-4802-80ce-536e55e966a8", APIVersion:"apps/v1", ResourceVersion:"1801", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-4dd6k
core.sh:1152: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
service/nginx-deployment exposed
core.sh:1156: Successful get service nginx-deployment {{(index .spec.ports 0).port}}: 80
deployment.apps "nginx-deployment" deleted
service "nginx-deployment" deleted
E1203 13:10:24.832290   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1203 13:10:24.920978   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/frontend created
I1203 13:10:24.953050   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1575378619-32691", Name:"frontend", UID:"9341b53b-6212-47d5-b481-705b8f020e7e", APIVersion:"v1", ResourceVersion:"1828", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-6fpng
I1203 13:10:24.958582   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1575378619-32691", Name:"frontend", UID:"9341b53b-6212-47d5-b481-705b8f020e7e", APIVersion:"v1", ResourceVersion:"1828", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-k8mfc
I1203 13:10:24.958779   54248 event.go:281] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1575378619-32691", Name:"frontend", UID:"9341b53b-6212-47d5-b481-705b8f020e7e", APIVersion:"v1", ResourceVersion:"1828", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-vnrcd
E1203 13:10:25.039803   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1163: Successful get rc frontend {{.spec.replicas}}: 3
service/frontend exposed
E1203 13:10:25.154731   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1167: Successful get service frontend {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
service/frontend-2 exposed
core.sh:1171: Successful get service frontend-2 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 443
pod/valid-pod created
service/frontend-3 exposed
E1203 13:10:25.833571   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1176: Successful get service frontend-3 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 444
E1203 13:10:25.922180   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/frontend-4 exposed
E1203 13:10:26.040934   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1180: Successful get service frontend-4 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: default 80
E1203 13:10:26.156076   54248 reflector.go:156] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/frontend-5 exposed
core.sh:1184: