PR: draveness: feat: update taint nodes by condition to GA
Result: FAILURE
Tests: 3 failed / 2896 succeeded
Started: 2019-10-18 09:46
Elapsed: 27m37s
Builder: gke-prow-ssd-pool-1a225945-d0kf
Refs: master:b5740749, 82703:f7290589
pod: 0556af92-f18c-11e9-b832-42d776dec259
infra-commit: 1994b1dd5
repo: k8s.io/kubernetes
repo-commit: c7a4be70d760177c3be01bad8effc5a738f47d89
repos: k8s.io/kubernetes: master:b574074981a8fd0bdfb35090560b2d433b975e8d, 82703:f729058983de685d8ddf6d398d7a45a6e5081536

Test Failures


k8s.io/kubernetes/test/integration/apiserver/admissionwebhook TestWebhookTimeoutWithoutWatchCache 14s

go test -v k8s.io/kubernetes/test/integration/apiserver/admissionwebhook -run TestWebhookTimeoutWithoutWatchCache$
=== RUN   TestWebhookTimeoutWithoutWatchCache
I1018 10:05:50.615499  100552 serving.go:313] Generated self-signed cert (/tmp/kubernetes-kube-apiserver343669285/apiserver.crt, /tmp/kubernetes-kube-apiserver343669285/apiserver.key)
I1018 10:05:50.615522  100552 server.go:622] external host was not specified, using 127.0.0.1
W1018 10:05:50.615531  100552 authentication.go:424] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
W1018 10:05:50.990821  100552 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
[... identical mutation_detector.go:50 warning repeated 11 more times, 10:05:50.990853-10:05:50.992435 ...]
I1018 10:05:50.992450  100552 plugins.go:158] Loaded 9 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I1018 10:05:50.992457  100552 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I1018 10:05:50.993244  100552 plugins.go:158] Loaded 9 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I1018 10:05:50.993262  100552 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I1018 10:05:50.994657  100552 client.go:357] parsed scheme: "endpoint"
I1018 10:05:50.994690  100552 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:05:50.996023  100552 client.go:357] parsed scheme: "endpoint"
I1018 10:05:50.996056  100552 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W1018 10:05:51.020564  100552 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1018 10:05:51.021554  100552 master.go:261] Using reconciler: lease
I1018 10:05:51.021819  100552 client.go:357] parsed scheme: "endpoint"
I1018 10:05:51.021850  100552 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
[... identical client.go:357 "parsed scheme"/endpoint.go:68 "ccResolverWrapper" log pair repeated 17 more times, 10:05:51.024449-10:05:51.045294 ...]
I1018 10:05:51.046055  100552 rest.go:115] the default service ipfamily for this cluster is: IPv4
I1018 10:05:51.154315  100552 client.go:357] parsed scheme: "endpoint"
I1018 10:05:51.154355  100552 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
[... identical client.go:357 "parsed scheme"/endpoint.go:68 "ccResolverWrapper" log pair repeated 58 more times, 10:05:51.155828-10:05:51.215112 ...]
I1018 10:05:51.556625  100552 plugins.go:158] Loaded 9 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I1018 10:05:51.556652  100552 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
W1018 10:05:51.558177  100552 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1018 10:05:51.558448  100552 client.go:357] parsed scheme: "endpoint"
I1018 10:05:51.558484  100552 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:05:51.559440  100552 client.go:357] parsed scheme: "endpoint"
I1018 10:05:51.559486  100552 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W1018 10:05:51.562236  100552 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1018 10:05:51.991991  100552 client.go:357] parsed scheme: "endpoint"
I1018 10:05:51.992298  100552 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:05:54.796422  100552 secure_serving.go:149] Serving securely on 127.0.0.1:34921
I1018 10:05:54.796492  100552 tlsconfig.go:198] Starting DynamicServingCertificateController
I1018 10:05:54.796581  100552 apiservice_controller.go:94] Starting APIServiceRegistrationController
I1018 10:05:54.796623  100552 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1018 10:05:54.796742  100552 available_controller.go:386] Starting AvailableConditionController
I1018 10:05:54.796770  100552 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1018 10:05:54.797197  100552 crd_finalizer.go:263] Starting CRDFinalizer
I1018 10:05:54.797219  100552 naming_controller.go:288] Starting NamingConditionController
I1018 10:05:54.797226  100552 customresource_discovery_controller.go:208] Starting DiscoveryController
I1018 10:05:54.797235  100552 establishing_controller.go:73] Starting EstablishingController
I1018 10:05:54.797250  100552 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I1018 10:05:54.797265  100552 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1018 10:05:54.797292  100552 controller.go:85] Starting OpenAPI controller
I1018 10:05:54.797414  100552 autoregister_controller.go:140] Starting autoregister controller
I1018 10:05:54.797426  100552 controller.go:81] Starting OpenAPI AggregationController
I1018 10:05:54.797661  100552 crdregistration_controller.go:111] Starting crd-autoregister controller
I1018 10:05:54.797675  100552 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
I1018 10:05:54.797426  100552 cache.go:32] Waiting for caches to sync for autoregister controller
E1018 10:05:54.809540  100552 controller.go:156] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /ed6c0525-2c6e-4d2a-8381-053744c0d86b/registry/masterleases/127.0.0.1, ResourceVersion: 0, AdditionalErrorMsg: 
I1018 10:05:54.896970  100552 cache.go:39] Caches are synced for AvailableConditionController controller
I1018 10:05:54.897014  100552 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1018 10:05:54.897844  100552 shared_informer.go:204] Caches are synced for crd-autoregister 
I1018 10:05:54.897965  100552 cache.go:39] Caches are synced for autoregister controller
I1018 10:05:55.796302  100552 controller.go:107] OpenAPI AggregationController: Processing item 
I1018 10:05:55.796338  100552 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1018 10:05:55.796355  100552 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1018 10:05:55.801909  100552 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I1018 10:05:55.805195  100552 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I1018 10:05:55.805315  100552 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
W1018 10:05:55.847034  100552 lease.go:222] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
E1018 10:05:55.847929  100552 controller.go:227] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
--- FAIL: TestWebhookTimeoutWithoutWatchCache (14.46s)
    testserver.go:143: runtime-config=map[api/all:true]
    testserver.go:144: Starting kube-apiserver on port 34921...
    testserver.go:160: Waiting for /healthz to be ok...
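The `E1018 10:05:55.847929` line above shows Endpoints validation rejecting `127.0.0.1` because it falls inside the loopback range. As an illustration only (this is not the apiserver's validation code, just the same CIDR membership test), the check can be sketched with Python's stdlib:

```python
import ipaddress

def in_loopback_range(ip: str) -> bool:
    """Return True if ip falls inside 127.0.0.0/8, the range the
    Endpoints validator rejects for subset addresses."""
    return ipaddress.ip_address(ip) in ipaddress.ip_network("127.0.0.0/8")

print(in_loopback_range("127.0.0.1"))  # the address rejected in the error above
print(in_loopback_range("10.0.0.1"))   # a non-loopback address passes
```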

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20191018-100212.xml



k8s.io/kubernetes/test/integration/apiserver/admissionwebhook TestWebhookTimeoutWithoutWatchCache/minimum_of_request_timeout_or_webhook_timeout_propagated 0.03s

go test -v k8s.io/kubernetes/test/integration/apiserver/admissionwebhook -run TestWebhookTimeoutWithoutWatchCache/minimum_of_request_timeout_or_webhook_timeout_propagated$
=== RUN   TestWebhookTimeoutWithoutWatchCache/minimum_of_request_timeout_or_webhook_timeout_propagated
    --- FAIL: TestWebhookTimeoutWithoutWatchCache/minimum_of_request_timeout_or_webhook_timeout_propagated (0.03s)
        timeout_test.go:321: expected invocation of /mutating/1/0s, got /validating/3/0s
        timeout_test.go:321: expected invocation of /mutating/2/0s, got /validating/4/0s
        timeout_test.go:316: expected invocation of /validating/3/0s, got none
        timeout_test.go:316: expected invocation of /validating/4/0s, got none
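The subtest name states the behavior under test: the effective webhook call timeout should be the minimum of the API request's timeout and the webhook's configured `timeoutSeconds`. A minimal sketch of that propagation rule (illustrative only, not the test's or the apiserver's implementation):

```python
def effective_webhook_timeout(request_timeout_s: float, webhook_timeout_s: float) -> float:
    """The webhook call is bounded by whichever is shorter: the request's
    remaining timeout or the webhook's configured timeoutSeconds."""
    return min(request_timeout_s, webhook_timeout_s)

print(effective_webhook_timeout(10, 30))  # request timeout is the bound
print(effective_webhook_timeout(30, 10))  # webhook timeoutSeconds is the bound
```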

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20191018-100212.xml



k8s.io/kubernetes/test/integration/scheduler TestSchedulerCreationFromConfigMap 4.30s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestSchedulerCreationFromConfigMap$
=== RUN   TestSchedulerCreationFromConfigMap
W1018 10:11:09.473105  104023 services.go:35] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I1018 10:11:09.473135  104023 services.go:47] Setting service IP to "10.0.0.1" (read-write).
I1018 10:11:09.473156  104023 master.go:305] Node port range unspecified. Defaulting to 30000-32767.
I1018 10:11:09.473167  104023 master.go:261] Using reconciler: 
I1018 10:11:09.474970  104023 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.475309  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.475644  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.476524  104023 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I1018 10:11:09.476580  104023 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.476864  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.476907  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.476992  104023 reflector.go:185] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I1018 10:11:09.478218  104023 watch_cache.go:409] Replace watchCache (rev: 44714) 
I1018 10:11:09.478544  104023 store.go:1342] Monitoring events count at <storage-prefix>//events
I1018 10:11:09.478603  104023 reflector.go:185] Listing and watching *core.Event from storage/cacher.go:/events
I1018 10:11:09.478597  104023 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.478853  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.478901  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.479693  104023 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I1018 10:11:09.479743  104023 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.479757  104023 reflector.go:185] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I1018 10:11:09.479861  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.479894  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.479872  104023 watch_cache.go:409] Replace watchCache (rev: 44714) 
I1018 10:11:09.480826  104023 watch_cache.go:409] Replace watchCache (rev: 44714) 
I1018 10:11:09.483226  104023 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I1018 10:11:09.483347  104023 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.483494  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.483507  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.483563  104023 reflector.go:185] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I1018 10:11:09.489853  104023 watch_cache.go:409] Replace watchCache (rev: 44714) 
I1018 10:11:09.490281  104023 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I1018 10:11:09.490321  104023 reflector.go:185] Listing and watching *core.Secret from storage/cacher.go:/secrets
I1018 10:11:09.490424  104023 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.490527  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.490541  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.491498  104023 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I1018 10:11:09.491754  104023 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.492038  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.492086  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.492328  104023 reflector.go:185] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I1018 10:11:09.493774  104023 watch_cache.go:409] Replace watchCache (rev: 44716) 
I1018 10:11:09.494404  104023 watch_cache.go:409] Replace watchCache (rev: 44716) 
I1018 10:11:09.494522  104023 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I1018 10:11:09.494782  104023 reflector.go:185] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I1018 10:11:09.494813  104023 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.495000  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.495022  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.496261  104023 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I1018 10:11:09.496415  104023 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.496536  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.496553  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.496630  104023 reflector.go:185] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I1018 10:11:09.497237  104023 watch_cache.go:409] Replace watchCache (rev: 44716) 
I1018 10:11:09.497254  104023 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I1018 10:11:09.497406  104023 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.497502  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.497518  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.497563  104023 reflector.go:185] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I1018 10:11:09.498415  104023 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I1018 10:11:09.498573  104023 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.498618  104023 reflector.go:185] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I1018 10:11:09.498713  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.498728  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.498741  104023 watch_cache.go:409] Replace watchCache (rev: 44716) 
I1018 10:11:09.500432  104023 watch_cache.go:409] Replace watchCache (rev: 44716) 
I1018 10:11:09.500544  104023 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I1018 10:11:09.500722  104023 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.500813  104023 reflector.go:185] Listing and watching *core.Node from storage/cacher.go:/minions
I1018 10:11:09.500840  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.500854  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.500572  104023 watch_cache.go:409] Replace watchCache (rev: 44716) 
I1018 10:11:09.503026  104023 watch_cache.go:409] Replace watchCache (rev: 44716) 
I1018 10:11:09.503211  104023 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I1018 10:11:09.503361  104023 reflector.go:185] Listing and watching *core.Pod from storage/cacher.go:/pods
I1018 10:11:09.503363  104023 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.503480  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.503716  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.505227  104023 watch_cache.go:409] Replace watchCache (rev: 44716) 
I1018 10:11:09.508571  104023 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I1018 10:11:09.508727  104023 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.509085  104023 reflector.go:185] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I1018 10:11:09.509255  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.509303  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.509830  104023 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I1018 10:11:09.509872  104023 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.510031  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.510047  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.510118  104023 reflector.go:185] Listing and watching *core.Service from storage/cacher.go:/services/specs
I1018 10:11:09.510634  104023 watch_cache.go:409] Replace watchCache (rev: 44716) 
I1018 10:11:09.511217  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.511835  104023 watch_cache.go:409] Replace watchCache (rev: 44716) 
I1018 10:11:09.511846  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.512631  104023 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.512758  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.512775  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.514067  104023 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I1018 10:11:09.514091  104023 rest.go:115] the default service ipfamily for this cluster is: IPv4
I1018 10:11:09.514196  104023 reflector.go:185] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I1018 10:11:09.514602  104023 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.514844  104023 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.515780  104023 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.515874  104023 watch_cache.go:409] Replace watchCache (rev: 44716) 
I1018 10:11:09.516478  104023 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.517167  104023 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.517855  104023 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.518841  104023 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.519552  104023 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.519991  104023 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.520689  104023 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.521715  104023 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.522109  104023 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.523440  104023 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.524166  104023 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.524946  104023 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.525499  104023 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.526751  104023 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.527464  104023 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.527992  104023 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.528327  104023 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.528709  104023 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.530004  104023 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.530323  104023 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.531234  104023 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.531857  104023 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.532911  104023 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.534015  104023 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.534544  104023 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.535032  104023 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.536264  104023 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.536663  104023 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.537466  104023 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.538370  104023 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.539227  104023 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.540301  104023 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.540654  104023 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.540774  104023 master.go:453] Skipping disabled API group "auditregistration.k8s.io".
I1018 10:11:09.540856  104023 master.go:464] Enabling API group "authentication.k8s.io".
I1018 10:11:09.540927  104023 master.go:464] Enabling API group "authorization.k8s.io".
I1018 10:11:09.541150  104023 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.541446  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.541538  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.542398  104023 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1018 10:11:09.542471  104023 reflector.go:185] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1018 10:11:09.542636  104023 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.542774  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.542797  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.544378  104023 watch_cache.go:409] Replace watchCache (rev: 44716) 
I1018 10:11:09.545338  104023 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1018 10:11:09.545501  104023 reflector.go:185] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1018 10:11:09.545509  104023 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.545653  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.545675  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.546706  104023 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1018 10:11:09.546731  104023 master.go:464] Enabling API group "autoscaling".
I1018 10:11:09.546951  104023 reflector.go:185] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1018 10:11:09.546952  104023 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.547085  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.547110  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.547159  104023 watch_cache.go:409] Replace watchCache (rev: 44716) 
I1018 10:11:09.548611  104023 watch_cache.go:409] Replace watchCache (rev: 44716) 
I1018 10:11:09.548934  104023 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I1018 10:11:09.549087  104023 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.549203  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.549221  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.549302  104023 reflector.go:185] Listing and watching *batch.Job from storage/cacher.go:/jobs
I1018 10:11:09.550256  104023 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I1018 10:11:09.550277  104023 master.go:464] Enabling API group "batch".
I1018 10:11:09.550420  104023 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.550520  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.550537  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.550608  104023 reflector.go:185] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I1018 10:11:09.551148  104023 watch_cache.go:409] Replace watchCache (rev: 44716) 
I1018 10:11:09.554667  104023 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I1018 10:11:09.554694  104023 master.go:464] Enabling API group "certificates.k8s.io".
I1018 10:11:09.554700  104023 reflector.go:185] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I1018 10:11:09.554853  104023 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.555170  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.555197  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.555678  104023 watch_cache.go:409] Replace watchCache (rev: 44716) 
I1018 10:11:09.556296  104023 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I1018 10:11:09.556454  104023 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.556582  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.556605  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.556678  104023 reflector.go:185] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I1018 10:11:09.559349  104023 watch_cache.go:409] Replace watchCache (rev: 44716) 
I1018 10:11:09.559390  104023 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I1018 10:11:09.559409  104023 master.go:464] Enabling API group "coordination.k8s.io".
I1018 10:11:09.559424  104023 master.go:453] Skipping disabled API group "discovery.k8s.io".
I1018 10:11:09.559574  104023 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.559678  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.559695  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.559761  104023 watch_cache.go:409] Replace watchCache (rev: 44716) 
I1018 10:11:09.559770  104023 reflector.go:185] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I1018 10:11:09.560714  104023 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I1018 10:11:09.560744  104023 master.go:464] Enabling API group "extensions".
I1018 10:11:09.561028  104023 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.561161  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.561178  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.561250  104023 reflector.go:185] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I1018 10:11:09.561923  104023 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I1018 10:11:09.562068  104023 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.562575  104023 watch_cache.go:409] Replace watchCache (rev: 44716) 
I1018 10:11:09.562602  104023 reflector.go:185] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I1018 10:11:09.563018  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.563043  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.564829  104023 watch_cache.go:409] Replace watchCache (rev: 44716) 
I1018 10:11:09.565911  104023 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I1018 10:11:09.565931  104023 master.go:464] Enabling API group "networking.k8s.io".
I1018 10:11:09.565983  104023 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.566086  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.566106  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.566196  104023 reflector.go:185] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I1018 10:11:09.566493  104023 watch_cache.go:409] Replace watchCache (rev: 44716) 
I1018 10:11:09.568227  104023 watch_cache.go:409] Replace watchCache (rev: 44716) 
I1018 10:11:09.568245  104023 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I1018 10:11:09.568268  104023 master.go:464] Enabling API group "node.k8s.io".
I1018 10:11:09.568438  104023 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.568561  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.568578  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.568652  104023 reflector.go:185] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I1018 10:11:09.569819  104023 watch_cache.go:409] Replace watchCache (rev: 44716) 
I1018 10:11:09.570150  104023 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I1018 10:11:09.570299  104023 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.570411  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.570428  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.570506  104023 reflector.go:185] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I1018 10:11:09.571918  104023 watch_cache.go:409] Replace watchCache (rev: 44716) 
I1018 10:11:09.572510  104023 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I1018 10:11:09.572526  104023 master.go:464] Enabling API group "policy".
I1018 10:11:09.572602  104023 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.572725  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.572748  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.572829  104023 reflector.go:185] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I1018 10:11:09.574176  104023 watch_cache.go:409] Replace watchCache (rev: 44717) 
I1018 10:11:09.574449  104023 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I1018 10:11:09.574487  104023 reflector.go:185] Listing and watching *rbac.Role from storage/cacher.go:/roles
I1018 10:11:09.574626  104023 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.574728  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.574744  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.575721  104023 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I1018 10:11:09.575775  104023 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.575912  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.575928  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.575994  104023 reflector.go:185] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I1018 10:11:09.578194  104023 watch_cache.go:409] Replace watchCache (rev: 44717) 
I1018 10:11:09.578216  104023 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I1018 10:11:09.578368  104023 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.578500  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.578516  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.578596  104023 reflector.go:185] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I1018 10:11:09.579136  104023 watch_cache.go:409] Replace watchCache (rev: 44717) 
I1018 10:11:09.580110  104023 watch_cache.go:409] Replace watchCache (rev: 44717) 
I1018 10:11:09.580853  104023 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I1018 10:11:09.580932  104023 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.581049  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.581066  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.581148  104023 reflector.go:185] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I1018 10:11:09.586822  104023 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I1018 10:11:09.587007  104023 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.587130  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.587150  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.587246  104023 reflector.go:185] Listing and watching *rbac.Role from storage/cacher.go:/roles
I1018 10:11:09.590094  104023 watch_cache.go:409] Replace watchCache (rev: 44718) 
I1018 10:11:09.591520  104023 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I1018 10:11:09.591577  104023 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.591709  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.591726  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.591805  104023 reflector.go:185] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I1018 10:11:09.592183  104023 watch_cache.go:409] Replace watchCache (rev: 44718) 
I1018 10:11:09.594063  104023 watch_cache.go:409] Replace watchCache (rev: 44718) 
I1018 10:11:09.594393  104023 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I1018 10:11:09.594576  104023 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.594664  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.594676  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.594733  104023 reflector.go:185] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I1018 10:11:09.597564  104023 watch_cache.go:409] Replace watchCache (rev: 44718) 
I1018 10:11:09.597996  104023 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I1018 10:11:09.598024  104023 master.go:464] Enabling API group "rbac.authorization.k8s.io".
I1018 10:11:09.598489  104023 reflector.go:185] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I1018 10:11:09.600607  104023 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.600922  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.601041  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.601492  104023 watch_cache.go:409] Replace watchCache (rev: 44718) 
I1018 10:11:09.608217  104023 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I1018 10:11:09.608400  104023 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.608519  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.608546  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.608627  104023 reflector.go:185] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I1018 10:11:09.618076  104023 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I1018 10:11:09.618102  104023 master.go:464] Enabling API group "scheduling.k8s.io".
I1018 10:11:09.618201  104023 master.go:453] Skipping disabled API group "settings.k8s.io".
I1018 10:11:09.618336  104023 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.618455  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.618478  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.618543  104023 reflector.go:185] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I1018 10:11:09.621130  104023 watch_cache.go:409] Replace watchCache (rev: 44718) 
I1018 10:11:09.621427  104023 watch_cache.go:409] Replace watchCache (rev: 44718) 
I1018 10:11:09.621627  104023 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I1018 10:11:09.621749  104023 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.621915  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.621938  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.622021  104023 reflector.go:185] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I1018 10:11:09.625287  104023 watch_cache.go:409] Replace watchCache (rev: 44718) 
I1018 10:11:09.625684  104023 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I1018 10:11:09.625949  104023 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.626298  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.626475  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.625715  104023 reflector.go:185] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I1018 10:11:09.637628  104023 watch_cache.go:409] Replace watchCache (rev: 44719) 
I1018 10:11:09.638207  104023 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I1018 10:11:09.638361  104023 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.638491  104023 reflector.go:185] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I1018 10:11:09.638759  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.638868  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.640404  104023 watch_cache.go:409] Replace watchCache (rev: 44721) 
I1018 10:11:09.641572  104023 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I1018 10:11:09.641618  104023 reflector.go:185] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I1018 10:11:09.641778  104023 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.642152  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.642181  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.644033  104023 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I1018 10:11:09.644182  104023 watch_cache.go:409] Replace watchCache (rev: 44721) 
I1018 10:11:09.644223  104023 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.644337  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.644357  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.644373  104023 reflector.go:185] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I1018 10:11:09.645802  104023 watch_cache.go:409] Replace watchCache (rev: 44721) 
I1018 10:11:09.646453  104023 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I1018 10:11:09.646479  104023 master.go:464] Enabling API group "storage.k8s.io".
I1018 10:11:09.646483  104023 reflector.go:185] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I1018 10:11:09.646623  104023 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.646772  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.646795  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.647591  104023 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I1018 10:11:09.647748  104023 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.647891  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.647910  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.647976  104023 reflector.go:185] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I1018 10:11:09.650044  104023 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I1018 10:11:09.650213  104023 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.650310  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.650335  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.650418  104023 reflector.go:185] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I1018 10:11:09.650843  104023 watch_cache.go:409] Replace watchCache (rev: 44721) 
I1018 10:11:09.653205  104023 watch_cache.go:409] Replace watchCache (rev: 44721) 
I1018 10:11:09.653605  104023 watch_cache.go:409] Replace watchCache (rev: 44721) 
I1018 10:11:09.653672  104023 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I1018 10:11:09.654112  104023 reflector.go:185] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I1018 10:11:09.654125  104023 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.654234  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.654250  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.655267  104023 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I1018 10:11:09.655287  104023 watch_cache.go:409] Replace watchCache (rev: 44721) 
I1018 10:11:09.655344  104023 reflector.go:185] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I1018 10:11:09.655436  104023 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.655638  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.655665  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.656983  104023 watch_cache.go:409] Replace watchCache (rev: 44721) 
I1018 10:11:09.657091  104023 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I1018 10:11:09.657108  104023 master.go:464] Enabling API group "apps".
I1018 10:11:09.657154  104023 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.657243  104023 reflector.go:185] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I1018 10:11:09.657277  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.657293  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.658785  104023 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I1018 10:11:09.658837  104023 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.658930  104023 watch_cache.go:409] Replace watchCache (rev: 44721) 
I1018 10:11:09.658980  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.658999  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.659074  104023 reflector.go:185] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I1018 10:11:09.660944  104023 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I1018 10:11:09.660997  104023 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.661684  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.661706  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.661785  104023 reflector.go:185] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I1018 10:11:09.664703  104023 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I1018 10:11:09.664742  104023 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.664868  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.664892  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.664943  104023 reflector.go:185] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I1018 10:11:09.665306  104023 watch_cache.go:409] Replace watchCache (rev: 44721) 
I1018 10:11:09.666103  104023 watch_cache.go:409] Replace watchCache (rev: 44721) 
I1018 10:11:09.668172  104023 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I1018 10:11:09.668189  104023 master.go:464] Enabling API group "admissionregistration.k8s.io".
I1018 10:11:09.668227  104023 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.668391  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:09.668403  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:09.668464  104023 reflector.go:185] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I1018 10:11:09.668937  104023 watch_cache.go:409] Replace watchCache (rev: 44721) 
I1018 10:11:09.669255  104023 store.go:1342] Monitoring events count at <storage-prefix>//events
I1018 10:11:09.669267  104023 master.go:464] Enabling API group "events.k8s.io".
I1018 10:11:09.669442  104023 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.669568  104023 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.669773  104023 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.669898  104023 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.670028  104023 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.670141  104023 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.670318  104023 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.670419  104023 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.670515  104023 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.670619  104023 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.671537  104023 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.671790  104023 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.672661  104023 reflector.go:185] Listing and watching *core.Event from storage/cacher.go:/events
I1018 10:11:09.673785  104023 watch_cache.go:409] Replace watchCache (rev: 44721) 
I1018 10:11:09.674973  104023 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.675225  104023 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.675945  104023 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.676015  104023 watch_cache.go:409] Replace watchCache (rev: 44721) 
I1018 10:11:09.676272  104023 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.677135  104023 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.677469  104023 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.678231  104023 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.678619  104023 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1018 10:11:09.678801  104023 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I1018 10:11:09.680148  104023 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.680434  104023 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.680831  104023 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.681595  104023 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.682355  104023 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.683235  104023 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.683535  104023 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.685404  104023 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.689677  104023 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.690145  104023 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.691533  104023 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1018 10:11:09.691875  104023 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I1018 10:11:09.693277  104023 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.695259  104023 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.696983  104023 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.699133  104023 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.699828  104023 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.701579  104023 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.719127  104023 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.719985  104023 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.740336  104023 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.741374  104023 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.742215  104023 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1018 10:11:09.742408  104023 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I1018 10:11:09.743242  104023 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.743993  104023 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1018 10:11:09.744155  104023 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I1018 10:11:09.744772  104023 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.745416  104023 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.745839  104023 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.746492  104023 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.747113  104023 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.747731  104023 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.748378  104023 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1018 10:11:09.748542  104023 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I1018 10:11:09.749523  104023 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.750292  104023 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.750668  104023 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.751443  104023 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.751869  104023 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.752256  104023 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.753066  104023 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.753500  104023 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.753855  104023 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.754710  104023 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.755076  104023 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.755501  104023 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1018 10:11:09.755664  104023 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W1018 10:11:09.755746  104023 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I1018 10:11:09.756563  104023 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.757253  104023 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.758105  104023 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.758805  104023 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.759665  104023 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"452ce914-8eb2-4936-8d82-079144f9d389", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", CAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1018 10:11:09.763622  104023 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1018 10:11:09.763654  104023 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I1018 10:11:09.763664  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:09.763674  104023 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1018 10:11:09.763683  104023 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1018 10:11:09.763690  104023 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1018 10:11:09.763717  104023 httplog.go:90] GET /healthz: (202.63µs) 0 [Go-http-client/1.1 127.0.0.1:59838]
I1018 10:11:09.765149  104023 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.422783ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59840]
I1018 10:11:09.767745  104023 httplog.go:90] GET /api/v1/services: (1.144629ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59840]
I1018 10:11:09.771613  104023 httplog.go:90] GET /api/v1/services: (836.727µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59840]
I1018 10:11:09.774373  104023 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1018 10:11:09.774400  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:09.774411  104023 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1018 10:11:09.774420  104023 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1018 10:11:09.774427  104023 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1018 10:11:09.774449  104023 httplog.go:90] GET /healthz: (151.021µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59840]
I1018 10:11:09.775134  104023 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.145064ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59838]
I1018 10:11:09.776199  104023 httplog.go:90] GET /api/v1/services: (1.557532ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59842]
I1018 10:11:09.776366  104023 httplog.go:90] GET /api/v1/services: (1.776293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:09.777251  104023 httplog.go:90] POST /api/v1/namespaces: (1.746704ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59838]
I1018 10:11:09.778473  104023 httplog.go:90] GET /api/v1/namespaces/kube-public: (934.448µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:09.779851  104023 httplog.go:90] POST /api/v1/namespaces: (1.036538ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:09.780772  104023 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (639.637µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:09.782490  104023 httplog.go:90] POST /api/v1/namespaces: (1.432553ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:09.865062  104023 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1018 10:11:09.865095  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:09.865108  104023 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1018 10:11:09.865124  104023 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1018 10:11:09.865132  104023 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1018 10:11:09.865160  104023 httplog.go:90] GET /healthz: (238.248µs) 0 [Go-http-client/1.1 127.0.0.1:59844]
I1018 10:11:09.875081  104023 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1018 10:11:09.875126  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:09.875139  104023 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1018 10:11:09.875149  104023 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1018 10:11:09.875167  104023 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1018 10:11:09.875202  104023 httplog.go:90] GET /healthz: (260.434µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:09.965038  104023 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1018 10:11:09.965071  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:09.965083  104023 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1018 10:11:09.965092  104023 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1018 10:11:09.965101  104023 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1018 10:11:09.965129  104023 httplog.go:90] GET /healthz: (259.347µs) 0 [Go-http-client/1.1 127.0.0.1:59844]
I1018 10:11:09.975098  104023 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1018 10:11:09.975134  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:09.975152  104023 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1018 10:11:09.975161  104023 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1018 10:11:09.975169  104023 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1018 10:11:09.975205  104023 httplog.go:90] GET /healthz: (262.903µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.065027  104023 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1018 10:11:10.065058  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:10.065070  104023 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1018 10:11:10.065080  104023 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1018 10:11:10.065087  104023 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1018 10:11:10.065113  104023 httplog.go:90] GET /healthz: (232.03µs) 0 [Go-http-client/1.1 127.0.0.1:59844]
I1018 10:11:10.075192  104023 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1018 10:11:10.075227  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:10.075239  104023 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1018 10:11:10.075262  104023 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1018 10:11:10.075270  104023 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1018 10:11:10.075301  104023 httplog.go:90] GET /healthz: (235.672µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.165012  104023 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1018 10:11:10.165041  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:10.165050  104023 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1018 10:11:10.165056  104023 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1018 10:11:10.165062  104023 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1018 10:11:10.165093  104023 httplog.go:90] GET /healthz: (259.048µs) 0 [Go-http-client/1.1 127.0.0.1:59844]
I1018 10:11:10.175210  104023 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1018 10:11:10.175252  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:10.175265  104023 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1018 10:11:10.175275  104023 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1018 10:11:10.175283  104023 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1018 10:11:10.175318  104023 httplog.go:90] GET /healthz: (247.366µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.265028  104023 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1018 10:11:10.265058  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:10.265068  104023 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1018 10:11:10.265074  104023 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1018 10:11:10.265080  104023 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1018 10:11:10.265111  104023 httplog.go:90] GET /healthz: (228.848µs) 0 [Go-http-client/1.1 127.0.0.1:59844]
I1018 10:11:10.275018  104023 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1018 10:11:10.275046  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:10.275055  104023 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1018 10:11:10.275061  104023 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1018 10:11:10.275067  104023 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1018 10:11:10.275096  104023 httplog.go:90] GET /healthz: (189.861µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.365077  104023 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1018 10:11:10.365117  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:10.365129  104023 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1018 10:11:10.365138  104023 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1018 10:11:10.365147  104023 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1018 10:11:10.365180  104023 httplog.go:90] GET /healthz: (256.61µs) 0 [Go-http-client/1.1 127.0.0.1:59844]
I1018 10:11:10.375046  104023 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1018 10:11:10.375087  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:10.375096  104023 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1018 10:11:10.375103  104023 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1018 10:11:10.375108  104023 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1018 10:11:10.375145  104023 httplog.go:90] GET /healthz: (223.989µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.465069  104023 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1018 10:11:10.465112  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:10.465124  104023 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1018 10:11:10.465142  104023 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1018 10:11:10.465150  104023 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1018 10:11:10.465203  104023 httplog.go:90] GET /healthz: (315.854µs) 0 [Go-http-client/1.1 127.0.0.1:59844]
I1018 10:11:10.473205  104023 client.go:357] parsed scheme: "endpoint"
I1018 10:11:10.473272  104023 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1018 10:11:10.476057  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:10.476081  104023 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1018 10:11:10.476090  104023 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1018 10:11:10.476096  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1018 10:11:10.476122  104023 httplog.go:90] GET /healthz: (1.250185ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.565786  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:10.565814  104023 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1018 10:11:10.565822  104023 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1018 10:11:10.565828  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1018 10:11:10.565866  104023 httplog.go:90] GET /healthz: (1.023874ms) 0 [Go-http-client/1.1 127.0.0.1:59844]
I1018 10:11:10.575650  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:10.575675  104023 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1018 10:11:10.575686  104023 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1018 10:11:10.575694  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1018 10:11:10.575727  104023 httplog.go:90] GET /healthz: (844.596µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.666087  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:10.666129  104023 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1018 10:11:10.666139  104023 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1018 10:11:10.666146  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1018 10:11:10.666186  104023 httplog.go:90] GET /healthz: (1.022772ms) 0 [Go-http-client/1.1 127.0.0.1:59844]
I1018 10:11:10.676268  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:10.676299  104023 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1018 10:11:10.676309  104023 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1018 10:11:10.676318  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1018 10:11:10.676358  104023 httplog.go:90] GET /healthz: (1.330206ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.765212  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.41726ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:10.765243  104023 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.795451ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.765516  104023 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-node-critical: (2.199294ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59840]
I1018 10:11:10.766492  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:10.766517  104023 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1018 10:11:10.766527  104023 healthz.go:177] healthz check poststarthook/ca-registration failed: not finished
I1018 10:11:10.766535  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/ca-registration failed: reason withheld
healthz check failed
I1018 10:11:10.766560  104023 httplog.go:90] GET /healthz: (1.416774ms) 0 [Go-http-client/1.1 127.0.0.1:60210]
I1018 10:11:10.766580  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.035653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.766503  104023 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication: (961.923µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:10.767856  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (981.184µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60210]
I1018 10:11:10.768159  104023 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.598344ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59840]
I1018 10:11:10.768720  104023 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
I1018 10:11:10.768943  104023 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.862308ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:10.769990  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.31464ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60210]
I1018 10:11:10.770421  104023 httplog.go:90] GET /apis/scheduling.k8s.io/v1beta1/priorityclasses/system-cluster-critical: (1.54558ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59840]
I1018 10:11:10.771153  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (717.008µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:10.772263  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (778.349µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:10.772288  104023 httplog.go:90] POST /apis/scheduling.k8s.io/v1beta1/priorityclasses: (1.53963ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59840]
I1018 10:11:10.772455  104023 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
I1018 10:11:10.772467  104023 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I1018 10:11:10.773256  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (731.084µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:10.774219  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (763.615µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:10.775321  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (864.646µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:10.775513  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:10.775542  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:10.775569  104023 httplog.go:90] GET /healthz: (753.673µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.776763  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (653.062µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.778805  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.698238ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.779267  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I1018 10:11:10.780448  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (964.012µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.782328  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.474778ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.782481  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I1018 10:11:10.783286  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (644.814µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.784790  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.141572ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.784971  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I1018 10:11:10.785641  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (529.024µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.787008  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.151913ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.787193  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I1018 10:11:10.788039  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (685.592µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.789263  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (970.15µs) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.789473  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I1018 10:11:10.790499  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (857.862µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.792307  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.49735ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.792558  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I1018 10:11:10.793789  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.097681ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.795576  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.413427ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.795787  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I1018 10:11:10.796909  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (732.068µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.798836  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.547643ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.799018  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I1018 10:11:10.799826  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (649.898µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.801946  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.806486ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.802604  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I1018 10:11:10.803557  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (631.618µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.807664  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.74049ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.807922  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I1018 10:11:10.811266  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (3.085862ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.813483  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.692982ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.813792  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I1018 10:11:10.819012  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (4.746685ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.822680  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.022621ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.822996  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I1018 10:11:10.824188  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.007496ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.826010  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.429782ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.826186  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I1018 10:11:10.827028  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (677.649µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.828823  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.496651ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.829140  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I1018 10:11:10.830108  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (798.463µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.831930  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.431626ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.832099  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I1018 10:11:10.833210  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (885.205µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.834964  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.436842ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.835251  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I1018 10:11:10.836203  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (768.713µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.837940  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.348214ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.838216  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I1018 10:11:10.839679  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (1.106374ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.841676  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.5805ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.842016  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I1018 10:11:10.842954  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (659.481µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.844477  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.159151ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.844663  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I1018 10:11:10.845693  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (766.959µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.847875  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.539913ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.848137  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I1018 10:11:10.849278  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-attacher: (937.754µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.851452  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.664521ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.851702  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-attacher
I1018 10:11:10.853185  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.142724ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.855210  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.597661ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.855435  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I1018 10:11:10.856815  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.102408ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.859067  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.787362ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.859352  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I1018 10:11:10.860488  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (895.885µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.864473  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.923994ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.864700  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I1018 10:11:10.866240  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:10.866265  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:10.866268  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.155236ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:10.866292  104023 httplog.go:90] GET /healthz: (1.36218ms) 0 [Go-http-client/1.1 127.0.0.1:59844]
I1018 10:11:10.868590  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.980493ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.868774  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I1018 10:11:10.869871  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (888.453µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.873011  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.660974ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.873308  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I1018 10:11:10.874209  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:csi-external-provisioner: (721.019µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.875875  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:10.875994  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:10.876161  104023 httplog.go:90] GET /healthz: (1.043427ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:10.877353  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.586226ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.877630  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:csi-external-provisioner
I1018 10:11:10.878658  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (774.391µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.880574  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.42905ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.880729  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1018 10:11:10.881655  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (702.005µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.883440  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.282727ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.883649  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1018 10:11:10.885160  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.339639ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.886682  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.136978ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.886943  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1018 10:11:10.887850  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (733.618µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.889648  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.242923ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.889862  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1018 10:11:10.891148  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (898.048µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.893016  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.361667ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.893264  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I1018 10:11:10.894293  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (761.878µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.896116  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.31928ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.896985  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I1018 10:11:10.898954  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.740996ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.900784  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.382953ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.901030  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1018 10:11:10.902144  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (861.781µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.905226  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.623608ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.906005  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I1018 10:11:10.909142  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (2.685479ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.911017  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.344579ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.911286  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1018 10:11:10.913157  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (1.420912ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.915242  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.687008ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.915535  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1018 10:11:10.917814  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.244711ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.920571  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.214941ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.920762  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I1018 10:11:10.921764  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (705.487µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.923811  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.576077ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.924016  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I1018 10:11:10.924994  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (682.15µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.926703  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.404084ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.926857  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I1018 10:11:10.928025  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (863.961µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.929810  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.518272ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.930068  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1018 10:11:10.931143  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (811.663µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.932595  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.0434ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.932729  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1018 10:11:10.933578  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (675.172µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.935254  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.413281ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.935429  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1018 10:11:10.936392  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (761.672µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.939168  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.544315ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.939413  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I1018 10:11:10.940703  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (971.722µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.942466  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.235562ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.942754  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1018 10:11:10.944217  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (900.42µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.946224  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.531559ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.946416  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I1018 10:11:10.947670  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.013079ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.949687  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.409494ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.949928  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I1018 10:11:10.951017  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (773.77µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.952703  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.275584ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.952903  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I1018 10:11:10.953980  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (913.32µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.956172  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.801088ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.956630  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1018 10:11:10.958578  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.588782ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.960547  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.683254ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.961042  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I1018 10:11:10.964530  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.037294ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:10.965784  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:10.965858  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:10.966154  104023 httplog.go:90] GET /healthz: (1.401011ms) 0 [Go-http-client/1.1 127.0.0.1:60208]
I1018 10:11:10.975762  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:10.975932  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:10.976167  104023 httplog.go:90] GET /healthz: (1.253856ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:10.985676  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.119302ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:10.985987  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I1018 10:11:11.007579  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.856433ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.025863  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.019756ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.026291  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1018 10:11:11.045069  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.364752ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.065278  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.658416ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.065746  104023 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1018 10:11:11.066274  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:11.066444  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:11.066668  104023 httplog.go:90] GET /healthz: (1.934029ms) 0 [Go-http-client/1.1 127.0.0.1:59844]
I1018 10:11:11.076191  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:11.076220  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:11.076279  104023 httplog.go:90] GET /healthz: (1.173282ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:11.084916  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (1.383266ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:11.109534  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.838478ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:11.110351  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I1018 10:11:11.124670  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.130853ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:11.145556  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.899241ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:11.146150  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I1018 10:11:11.165097  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.500336ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:11.165867  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:11.165906  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:11.165947  104023 httplog.go:90] GET /healthz: (1.156159ms) 0 [Go-http-client/1.1 127.0.0.1:60208]
I1018 10:11:11.175816  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:11.175846  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:11.176057  104023 httplog.go:90] GET /healthz: (958.616µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.185324  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.763074ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.185538  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I1018 10:11:11.205905  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (2.110292ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.225369  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.737944ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.225619  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I1018 10:11:11.245084  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.358907ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.266111  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:11.266135  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.485164ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.266148  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:11.266180  104023 httplog.go:90] GET /healthz: (1.38304ms) 0 [Go-http-client/1.1 127.0.0.1:59844]
I1018 10:11:11.266556  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I1018 10:11:11.275785  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:11.275817  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:11.275852  104023 httplog.go:90] GET /healthz: (843.594µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.285278  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.677283ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.309532  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.076459ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.309835  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I1018 10:11:11.324782  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.141345ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.345767  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.142984ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.346025  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I1018 10:11:11.365080  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (1.442855ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.365708  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:11.365739  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:11.365795  104023 httplog.go:90] GET /healthz: (986.019µs) 0 [Go-http-client/1.1 127.0.0.1:59844]
I1018 10:11:11.375950  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:11.376166  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:11.376360  104023 httplog.go:90] GET /healthz: (1.399672ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:11.386131  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.587159ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:11.386668  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I1018 10:11:11.406611  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (2.9925ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:11.425869  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.263935ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:11.426259  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I1018 10:11:11.445441  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.734669ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:11.466393  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:11.466417  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:11.466448  104023 httplog.go:90] GET /healthz: (1.199028ms) 0 [Go-http-client/1.1 127.0.0.1:60208]
I1018 10:11:11.466904  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.323641ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:11.467314  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I1018 10:11:11.476136  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:11.476178  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:11.476215  104023 httplog.go:90] GET /healthz: (1.139484ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:11.484704  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (1.068588ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:11.506248  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.7072ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:11.506467  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1018 10:11:11.525186  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.485684ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:11.546106  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.533516ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:11.546450  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1018 10:11:11.564596  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.021846ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:11.565610  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:11.565640  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:11.565731  104023 httplog.go:90] GET /healthz: (798.412µs) 0 [Go-http-client/1.1 127.0.0.1:60208]
I1018 10:11:11.576564  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:11.576701  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:11.576899  104023 httplog.go:90] GET /healthz: (1.926309ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.585668  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.064275ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.585982  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1018 10:11:11.605378  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.181507ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.625514  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.901846ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.625728  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1018 10:11:11.644865  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.268795ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.665571  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.977822ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.665672  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:11.665702  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:11.665732  104023 httplog.go:90] GET /healthz: (961.69µs) 0 [Go-http-client/1.1 127.0.0.1:59844]
I1018 10:11:11.665797  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I1018 10:11:11.675728  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:11.675757  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:11.675786  104023 httplog.go:90] GET /healthz: (811.763µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:11.684440  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (945.693µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:11.706523  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.562965ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:11.706850  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I1018 10:11:11.724873  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.187612ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:11.745592  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.991563ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:11.746392  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1018 10:11:11.764830  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.208423ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:11.766097  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:11.766120  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:11.766145  104023 httplog.go:90] GET /healthz: (1.018232ms) 0 [Go-http-client/1.1 127.0.0.1:60208]
I1018 10:11:11.775732  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:11.775772  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:11.775803  104023 httplog.go:90] GET /healthz: (856.912µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.786404  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.816755ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.786581  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I1018 10:11:11.805207  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.60485ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.826040  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.241518ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.826359  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1018 10:11:11.844932  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (1.252825ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.865507  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:11.865786  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:11.866078  104023 httplog.go:90] GET /healthz: (1.33121ms) 0 [Go-http-client/1.1 127.0.0.1:59844]
I1018 10:11:11.866601  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.968527ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.867013  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1018 10:11:11.876492  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:11.876517  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:11.876546  104023 httplog.go:90] GET /healthz: (1.668193ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.884637  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (996.236µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.906694  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.687572ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.907288  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I1018 10:11:11.925069  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.175117ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.945529  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.875635ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.945782  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I1018 10:11:11.964692  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.147115ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:11.965512  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:11.965540  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:11.965572  104023 httplog.go:90] GET /healthz: (849.184µs) 0 [Go-http-client/1.1 127.0.0.1:59844]
I1018 10:11:11.978295  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:11.978325  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:11.978363  104023 httplog.go:90] GET /healthz: (3.478693ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:11.986352  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.754193ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:11.986620  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I1018 10:11:12.009198  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (4.513187ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:12.026200  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.236688ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:12.026602  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1018 10:11:12.044777  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.14086ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:12.065955  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:12.065987  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:12.066053  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.423259ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:12.066062  104023 httplog.go:90] GET /healthz: (1.018771ms) 0 [Go-http-client/1.1 127.0.0.1:60208]
I1018 10:11:12.066248  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1018 10:11:12.075740  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:12.075776  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:12.075809  104023 httplog.go:90] GET /healthz: (833.283µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.084642  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.061634ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.109914  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (5.012454ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.110491  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1018 10:11:12.124564  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (947.619µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.145861  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.194827ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.147122  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I1018 10:11:12.164847  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (1.19088ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.165753  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:12.165782  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:12.165814  104023 httplog.go:90] GET /healthz: (1.036738ms) 0 [Go-http-client/1.1 127.0.0.1:59844]
I1018 10:11:12.175703  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:12.176026  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:12.176252  104023 httplog.go:90] GET /healthz: (1.38737ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:12.188017  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.386844ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:12.188271  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1018 10:11:12.205646  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (2.056122ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:12.225450  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.864607ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:12.225687  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I1018 10:11:12.244743  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.104402ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:12.265661  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.97496ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:12.266187  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I1018 10:11:12.266278  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:12.266298  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:12.266325  104023 httplog.go:90] GET /healthz: (914.113µs) 0 [Go-http-client/1.1 127.0.0.1:60208]
I1018 10:11:12.275667  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:12.275701  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:12.275735  104023 httplog.go:90] GET /healthz: (810.244µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.284702  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.146528ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.307812  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (4.23019ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.308069  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I1018 10:11:12.324691  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.022518ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.345732  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.044145ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.345973  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1018 10:11:12.364588  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (1.015106ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.365640  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:12.365726  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:12.365832  104023 httplog.go:90] GET /healthz: (1.020778ms) 0 [Go-http-client/1.1 127.0.0.1:60208]
I1018 10:11:12.375869  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:12.376033  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:12.376198  104023 httplog.go:90] GET /healthz: (1.320408ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.386401  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.720804ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.386627  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I1018 10:11:12.406277  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.352509ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.425288  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.66675ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.425511  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I1018 10:11:12.444939  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.383988ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.465776  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.205585ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.466011  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1018 10:11:12.466379  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:12.466428  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:12.466461  104023 httplog.go:90] GET /healthz: (1.664076ms) 0 [Go-http-client/1.1 127.0.0.1:59844]
I1018 10:11:12.475771  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:12.475802  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:12.475850  104023 httplog.go:90] GET /healthz: (892.362µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:12.484898  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.189153ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:12.505713  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.034638ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:12.505940  104023 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1018 10:11:12.524772  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (1.164905ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:12.526529  104023 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.026749ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:12.545555  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.023951ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:12.545862  104023 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I1018 10:11:12.564855  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.138038ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:12.566379  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:12.566401  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:12.566434  104023 httplog.go:90] GET /healthz: (1.622507ms) 0 [Go-http-client/1.1 127.0.0.1:60208]
I1018 10:11:12.567231  104023 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.837665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:12.575684  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:12.575708  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:12.575737  104023 httplog.go:90] GET /healthz: (875.218µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:12.585448  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (1.654105ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:12.585614  104023 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1018 10:11:12.605114  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.413345ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:12.607162  104023 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.539723ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:12.631590  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (8.009599ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:12.631797  104023 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1018 10:11:12.644496  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (912.33µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:12.646384  104023 httplog.go:90] GET /api/v1/namespaces/kube-system: (993.916µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:12.666037  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.469663ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:59844]
I1018 10:11:12.666390  104023 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1018 10:11:12.666836  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:12.666861  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:12.666908  104023 httplog.go:90] GET /healthz: (1.789643ms) 0 [Go-http-client/1.1 127.0.0.1:60208]
I1018 10:11:12.675556  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:12.675716  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:12.675863  104023 httplog.go:90] GET /healthz: (1.004336ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.685692  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (2.179993ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.687425  104023 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.178887ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.709455  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (5.822443ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.709675  104023 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1018 10:11:12.724484  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (967.383µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.726686  104023 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.780412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.746347  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.691946ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.746696  104023 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1018 10:11:12.764758  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (934.638µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.765564  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:12.765595  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:12.765627  104023 httplog.go:90] GET /healthz: (855.726µs) 0 [Go-http-client/1.1 127.0.0.1:59844]
I1018 10:11:12.766562  104023 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.236573ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.775555  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:12.775581  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:12.775637  104023 httplog.go:90] GET /healthz: (754.952µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.785550  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (1.904041ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.785751  104023 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1018 10:11:12.805822  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (2.224785ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.807449  104023 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.126454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.825482  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.824127ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.825772  104023 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I1018 10:11:12.844734  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.016565ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.846646  104023 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.25343ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.866939  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:12.866965  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:12.867003  104023 httplog.go:90] GET /healthz: (2.174545ms) 0 [Go-http-client/1.1 127.0.0.1:59844]
I1018 10:11:12.869319  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (5.696769ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.869681  104023 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1018 10:11:12.875509  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:12.875536  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:12.875564  104023 httplog.go:90] GET /healthz: (682.828µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.885707  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (1.874166ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.888122  104023 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.01271ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.907530  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.487931ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.908060  104023 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1018 10:11:12.926863  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (3.289931ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.930185  104023 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.112933ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.945781  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.175005ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.946036  104023 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
E1018 10:11:12.964621  104023 event_broadcaster.go:247] Unable to write event: 'Post http://127.0.0.1:35049/apis/events.k8s.io/v1beta1/namespaces/permit-plugin696d5749-5810-4fd1-8173-3fc19922df9e/events: dial tcp 127.0.0.1:35049: connect: connection refused' (may retry after sleeping)
I1018 10:11:12.964736  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.152208ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
E1018 10:11:12.964772  104023 event_broadcaster.go:197] Unable to write event '&v1beta1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"test-pod.15ceb5ec2808c167", GenerateName:"", Namespace:"permit-plugin696d5749-5810-4fd1-8173-3fc19922df9e", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xbf6281b074e53cf8, ext:51334771416, loc:(*time.Location)(0xa8efa80)}}, Series:(*v1beta1.EventSeries)(nil), ReportingController:"default-scheduler", ReportingInstance:"default-scheduler-cd6255eae05f", Action:"Scheduling", Reason:"FailedScheduling", Regarding:v1.ObjectReference{Kind:"Pod", Namespace:"permit-plugin696d5749-5810-4fd1-8173-3fc19922df9e", Name:"test-pod", UID:"e9db51a2-7303-4dcb-a547-6babe479580c", APIVersion:"v1", ResourceVersion:"29155", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"pod \"test-pod\" rejected due to timeout after waiting 3s at permit", Type:"Warning", DeprecatedSource:v1.EventSource{Component:"default-scheduler", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}' (retry limit exceeded!)
I1018 10:11:12.966898  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:12.966922  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:12.966954  104023 httplog.go:90] GET /healthz: (1.990723ms) 0 [Go-http-client/1.1 127.0.0.1:59844]
I1018 10:11:12.968062  104023 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.487586ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.979860  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:12.980035  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:12.980082  104023 httplog.go:90] GET /healthz: (2.821391ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.994296  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (7.121431ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:12.994519  104023 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1018 10:11:13.016633  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (13.068039ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:13.020536  104023 httplog.go:90] GET /api/v1/namespaces/kube-system: (3.407222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:13.025437  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (1.785361ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:13.028644  104023 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1018 10:11:13.045167  104023 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.587225ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:13.046544  104023 httplog.go:90] GET /api/v1/namespaces/kube-public: (998.906µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:13.065349  104023 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1018 10:11:13.065370  104023 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/ca-registration ok
healthz check failed
I1018 10:11:13.065394  104023 httplog.go:90] GET /healthz: (652.998µs) 0 [Go-http-client/1.1 127.0.0.1:59844]
I1018 10:11:13.065783  104023 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (2.206561ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:13.066182  104023 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1018 10:11:13.077225  104023 httplog.go:90] GET /healthz: (2.111631ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:13.085211  104023 httplog.go:90] GET /api/v1/namespaces/default: (6.554768ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:13.098468  104023 httplog.go:90] POST /api/v1/namespaces: (12.978265ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:13.099904  104023 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.068083ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:13.118492  104023 httplog.go:90] POST /api/v1/namespaces/default/services: (17.98875ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:13.119919  104023 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.078165ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:13.120982  104023 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (706.903µs) 422 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
E1018 10:11:13.121150  104023 controller.go:227] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: [subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address, (e.g. 10.9.8.7), subsets[0].addresses[0].ip: Invalid value: "<nil>": must be a valid IP address]
I1018 10:11:13.167199  104023 httplog.go:90] GET /healthz: (2.322398ms) 200 [Go-http-client/1.1 127.0.0.1:60208]
I1018 10:11:13.170402  104023 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.02031ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
W1018 10:11:13.170726  104023 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1018 10:11:13.170809  104023 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1018 10:11:13.170832  104023 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1018 10:11:13.171011  104023 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1018 10:11:13.171057  104023 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1018 10:11:13.171073  104023 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1018 10:11:13.171085  104023 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1018 10:11:13.171099  104023 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1018 10:11:13.171115  104023 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1018 10:11:13.171152  104023 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1018 10:11:13.171164  104023 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1018 10:11:13.172476  104023 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-0: (1.127233ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:13.172713  104023 factory.go:291] Creating scheduler from configuration: {{ } [{PredicateOne <nil>} {PredicateTwo <nil>}] [{PriorityOne 1 <nil>} {PriorityTwo 5 <nil>}] [] 0 false}
I1018 10:11:13.172739  104023 factory.go:308] Registering predicate: PredicateOne
I1018 10:11:13.172747  104023 algorithm_factory.go:288] Predicate type PredicateOne already registered, reusing.
I1018 10:11:13.172752  104023 factory.go:308] Registering predicate: PredicateTwo
I1018 10:11:13.172756  104023 algorithm_factory.go:288] Predicate type PredicateTwo already registered, reusing.
I1018 10:11:13.172761  104023 factory.go:323] Registering priority: PriorityOne
I1018 10:11:13.172766  104023 algorithm_factory.go:399] Priority type PriorityOne already registered, reusing.
I1018 10:11:13.172774  104023 factory.go:323] Registering priority: PriorityTwo
I1018 10:11:13.172777  104023 algorithm_factory.go:399] Priority type PriorityTwo already registered, reusing.
I1018 10:11:13.172783  104023 factory.go:369] Creating scheduler with fit predicates 'map[PredicateOne:{} PredicateTwo:{}]' and priority functions 'map[PriorityOne:{} PriorityTwo:{}]'
I1018 10:11:13.174633  104023 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.485387ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
W1018 10:11:13.174951  104023 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1018 10:11:13.176257  104023 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-1: (864.281µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:13.176448  104023 factory.go:291] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I1018 10:11:13.176471  104023 factory.go:300] Using predicates from algorithm provider 'DefaultProvider'
I1018 10:11:13.176480  104023 factory.go:315] Using priorities from algorithm provider 'DefaultProvider'
I1018 10:11:13.176484  104023 factory.go:369] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I1018 10:11:13.178336  104023 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.221268ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
W1018 10:11:13.178709  104023 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1018 10:11:13.180455  104023 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-2: (1.124655ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:13.180662  104023 factory.go:291] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I1018 10:11:13.180692  104023 factory.go:369] Creating scheduler with fit predicates 'map[]' and priority functions 'map[]'
I1018 10:11:13.183645  104023 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (2.425747ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
W1018 10:11:13.183940  104023 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1018 10:11:13.185626  104023 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-3: (1.37483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:13.186128  104023 factory.go:291] Creating scheduler from configuration: {{ } [{PredicateOne <nil>} {PredicateTwo <nil>}] [{PriorityOne 1 <nil>} {PriorityTwo 5 <nil>}] [] 0 false}
I1018 10:11:13.186286  104023 factory.go:308] Registering predicate: PredicateOne
I1018 10:11:13.186357  104023 algorithm_factory.go:288] Predicate type PredicateOne already registered, reusing.
I1018 10:11:13.186414  104023 factory.go:308] Registering predicate: PredicateTwo
I1018 10:11:13.186472  104023 algorithm_factory.go:288] Predicate type PredicateTwo already registered, reusing.
I1018 10:11:13.186524  104023 factory.go:323] Registering priority: PriorityOne
I1018 10:11:13.186583  104023 algorithm_factory.go:399] Priority type PriorityOne already registered, reusing.
I1018 10:11:13.186648  104023 factory.go:323] Registering priority: PriorityTwo
I1018 10:11:13.186707  104023 algorithm_factory.go:399] Priority type PriorityTwo already registered, reusing.
I1018 10:11:13.186762  104023 factory.go:369] Creating scheduler with fit predicates 'map[PredicateOne:{} PredicateTwo:{}]' and priority functions 'map[PriorityOne:{} PriorityTwo:{}]'
I1018 10:11:13.188603  104023 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.434268ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
W1018 10:11:13.188831  104023 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1018 10:11:13.189833  104023 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-4: (733.494µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:13.190108  104023 factory.go:291] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I1018 10:11:13.190131  104023 factory.go:300] Using predicates from algorithm provider 'DefaultProvider'
I1018 10:11:13.190139  104023 factory.go:315] Using priorities from algorithm provider 'DefaultProvider'
I1018 10:11:13.190145  104023 factory.go:369] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I1018 10:11:13.368452  104023 request.go:538] Throttling request took 177.957995ms, request: POST:http://127.0.0.1:35469/api/v1/namespaces/kube-system/configmaps
I1018 10:11:13.370679  104023 httplog.go:90] POST /api/v1/namespaces/kube-system/configmaps: (1.995077ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
W1018 10:11:13.371350  104023 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1018 10:11:13.568288  104023 request.go:538] Throttling request took 196.759423ms, request: GET:http://127.0.0.1:35469/api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-5
I1018 10:11:13.569866  104023 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps/scheduler-custom-policy-config-5: (1.312041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:13.570249  104023 factory.go:291] Creating scheduler from configuration: {{ } [] [] [] 0 false}
I1018 10:11:13.570270  104023 factory.go:369] Creating scheduler with fit predicates 'map[]' and priority functions 'map[]'
I1018 10:11:13.768322  104023 request.go:538] Throttling request took 197.742219ms, request: DELETE:http://127.0.0.1:35469/api/v1/nodes
I1018 10:11:13.769978  104023 httplog.go:90] DELETE /api/v1/nodes: (1.401178ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
I1018 10:11:13.770186  104023 controller.go:185] Shutting down kubernetes service endpoint reconciler
I1018 10:11:13.772800  104023 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (2.407235ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:60208]
--- FAIL: TestSchedulerCreationFromConfigMap (4.30s)
    scheduler_test.go:312: Expected predicates map[CheckNodeUnschedulable:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{}], got map[CheckNodeUnschedulable:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{}]
    scheduler_test.go:312: Expected predicates map[CheckNodeUnschedulable:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{}], got map[CheckNodeUnschedulable:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{}]

				from junit_d965d8661547eb73cabe6d94d5550ec333e4c0fa_20191018-100212.xml


