PR: robscott: Promoting EndpointSlices to beta
Result: FAILURE
Tests: 3 failed / 2895 succeeded
Started: 2019-11-09 00:01
Elapsed: 29m42s
Revision: a6d8ee0eaddbf16d212efd907500dfe84739b3bb
Refs: 84390

Test Failures


k8s.io/kubernetes/test/integration/etcd TestEtcdStoragePath 15s

go test -v k8s.io/kubernetes/test/integration/etcd -run TestEtcdStoragePath$
=== RUN   TestEtcdStoragePath
E1109 00:23:16.987903  107697 controller.go:183] Get https://127.0.0.1:33005/api/v1/namespaces/default/endpoints/kubernetes: dial tcp 127.0.0.1:33005: connect: connection refused
I1109 00:23:18.333286  107697 serving.go:306] Generated self-signed cert (/tmp/TestEtcdStoragePath529831514/apiserver.crt, /tmp/TestEtcdStoragePath529831514/apiserver.key)
I1109 00:23:18.333331  107697 server.go:622] external host was not specified, using 10.60.66.225
I1109 00:23:18.333817  107697 client.go:361] parsed scheme: "endpoint"
I1109 00:23:18.333865  107697 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W1109 00:23:19.263856  107697 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
[... the same mutation_detector.go:50 warning repeated 15 more times through 00:23:19.266183 ...]
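The log lines above use klog's standard line header: a severity letter (I = info, W = warning, E = error, F = fatal), the date as MMDD, the time, the process ID, and the emitting source file:line, followed by `] ` and the message. A minimal sketch of pulling those fields apart (the regular expression is my own illustration, not taken from klog):

```go
package main

import (
	"fmt"
	"regexp"
)

// klog header shape: Lmmdd hh:mm:ss.uuuuuu pid file:line] message
// Capture groups: 1=severity, 2=mmdd, 3=time, 4=pid, 5=file:line, 6=message.
var klogRe = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

func main() {
	line := "W1109 00:23:19.263856  107697 mutation_detector.go:50] " +
		"Mutation detector is enabled, this will result in memory leakage."
	m := klogRe.FindStringSubmatch(line)
	// Print the severity and the source location of the warning.
	fmt.Println(m[1], m[5])
}
```

So every `W1109 00:23:19...` line above is a warning emitted on November 9 by process 107697.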
I1109 00:23:19.266206  107697 plugins.go:158] Loaded 9 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I1109 00:23:19.266217  107697 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I1109 00:23:19.267458  107697 plugins.go:158] Loaded 9 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I1109 00:23:19.267476  107697 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I1109 00:23:19.269291  107697 client.go:361] parsed scheme: "endpoint"
I1109 00:23:19.269335  107697 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:23:19.270701  107697 client.go:361] parsed scheme: "endpoint"
I1109 00:23:19.270727  107697 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W1109 00:23:19.325952  107697 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1109 00:23:19.327308  107697 master.go:265] Using reconciler: lease
I1109 00:23:19.327900  107697 client.go:361] parsed scheme: "endpoint"
I1109 00:23:19.327938  107697 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
[... the same client.go:361 / endpoint.go:68 pair repeated 17 more times through 00:23:19.360536 ...]
I1109 00:23:19.361185  107697 rest.go:115] the default service ipfamily for this cluster is: IPv4
I1109 00:23:19.534789  107697 client.go:361] parsed scheme: "endpoint"
I1109 00:23:19.534946  107697 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
[... the same client.go:361 / endpoint.go:68 pair repeated 64 more times through 00:23:19.706487 ...]
W1109 00:23:20.100681  107697 genericapiserver.go:404] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
I1109 00:23:20.292046  107697 client.go:361] parsed scheme: "endpoint"
I1109 00:23:20.292214  107697 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:23:20.810799  107697 plugins.go:158] Loaded 9 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I1109 00:23:20.810833  107697 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
W1109 00:23:20.812539  107697 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1109 00:23:20.812851  107697 client.go:361] parsed scheme: "endpoint"
I1109 00:23:20.812895  107697 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:23:20.814385  107697 client.go:361] parsed scheme: "endpoint"
I1109 00:23:20.814423  107697 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
W1109 00:23:20.818609  107697 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1109 00:23:25.853414  107697 secure_serving.go:174] Serving securely on 127.0.0.1:36353
I1109 00:23:25.853423  107697 dynamic_serving_content.go:129] Starting serving-cert::/tmp/TestEtcdStoragePath529831514/apiserver.crt::/tmp/TestEtcdStoragePath529831514/apiserver.key
I1109 00:23:25.853510  107697 available_controller.go:386] Starting AvailableConditionController
I1109 00:23:25.853522  107697 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I1109 00:23:25.853554  107697 tlsconfig.go:220] Starting DynamicServingCertificateController
I1109 00:23:25.854717  107697 autoregister_controller.go:140] Starting autoregister controller
I1109 00:23:25.854743  107697 cache.go:32] Waiting for caches to sync for autoregister controller
W1109 00:23:25.855763  107697 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1109 00:23:25.855908  107697 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I1109 00:23:25.855925  107697 shared_informer.go:197] Waiting for caches to sync for cluster_authentication_trust_controller
I1109 00:23:25.856493  107697 crd_finalizer.go:263] Starting CRDFinalizer
I1109 00:23:25.859058  107697 crdregistration_controller.go:111] Starting crd-autoregister controller
I1109 00:23:25.859084  107697 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
I1109 00:23:25.859889  107697 controller.go:85] Starting OpenAPI controller
I1109 00:23:25.859925  107697 customresource_discovery_controller.go:208] Starting DiscoveryController
I1109 00:23:25.859951  107697 naming_controller.go:288] Starting NamingConditionController
I1109 00:23:25.859969  107697 establishing_controller.go:73] Starting EstablishingController
I1109 00:23:25.859990  107697 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I1109 00:23:25.860026  107697 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
I1109 00:23:25.869685  107697 controller.go:81] Starting OpenAPI AggregationController
E1109 00:23:25.872363  107697 controller.go:151] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/10.60.66.225, ResourceVersion: 0, AdditionalErrorMsg: 
I1109 00:23:25.873343  107697 apiservice_controller.go:94] Starting APIServiceRegistrationController
I1109 00:23:25.873361  107697 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I1109 00:23:25.873375  107697 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I1109 00:23:25.953741  107697 cache.go:39] Caches are synced for AvailableConditionController controller
I1109 00:23:25.954995  107697 cache.go:39] Caches are synced for autoregister controller
I1109 00:23:25.956185  107697 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller 
I1109 00:23:25.959293  107697 shared_informer.go:204] Caches are synced for crd-autoregister 
I1109 00:23:26.853660  107697 controller.go:107] OpenAPI AggregationController: Processing item 
I1109 00:23:26.853693  107697 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I1109 00:23:26.853710  107697 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I1109 00:23:26.864006  107697 storage_scheduling.go:133] created PriorityClass system-node-critical with value 2000001000
I1109 00:23:26.872032  107697 storage_scheduling.go:133] created PriorityClass system-cluster-critical with value 2000000000
I1109 00:23:26.872071  107697 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
I1109 00:23:27.526490  107697 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I1109 00:23:27.593937  107697 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W1109 00:23:27.739858  107697 lease.go:222] Resetting endpoints for master service "kubernetes" to [10.60.66.225]
I1109 00:23:27.741544  107697 controller.go:606] quota admission added evaluator for: endpoints
I1109 00:23:27.747749  107697 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
--- FAIL: TestEtcdStoragePath (15.64s)
    server.go:155: waiting for server to be healthy
    server.go:155: waiting for server to be healthy

				from junit_304dbea7698c16157bb4586f231ea1f94495b046_20191109-001841.xml



k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions 1m5s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions$
=== RUN   TestTaintBasedEvictions
I1109 00:29:24.265928  110118 feature_gate.go:225] feature gates: &{map[EvenPodsSpread:false TaintBasedEvictions:true]}
--- FAIL: TestTaintBasedEvictions (65.25s)

				from junit_304dbea7698c16157bb4586f231ea1f94495b046_20191109-001841.xml



k8s.io/kubernetes/test/integration/scheduler TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_200_tolerationseconds 35s

go test -v k8s.io/kubernetes/test/integration/scheduler -run TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_200_tolerationseconds$
=== RUN   TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_200_tolerationseconds
W1109 00:29:24.267677  110118 services.go:37] No CIDR for service cluster IPs specified. Default value which was 10.0.0.0/24 is deprecated and will be removed in future releases. Please specify it using --service-cluster-ip-range on kube-apiserver.
I1109 00:29:24.267697  110118 services.go:51] Setting service IP to "10.0.0.1" (read-write).
I1109 00:29:24.267712  110118 master.go:309] Node port range unspecified. Defaulting to 30000-32767.
I1109 00:29:24.267725  110118 master.go:265] Using reconciler: 
I1109 00:29:24.270415  110118 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.270669  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.270895  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.271821  110118 store.go:1342] Monitoring podtemplates count at <storage-prefix>//podtemplates
I1109 00:29:24.271883  110118 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.272150  110118 reflector.go:188] Listing and watching *core.PodTemplate from storage/cacher.go:/podtemplates
I1109 00:29:24.272183  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.272208  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.273593  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.274992  110118 store.go:1342] Monitoring events count at <storage-prefix>//events
I1109 00:29:24.275065  110118 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.275096  110118 reflector.go:188] Listing and watching *core.Event from storage/cacher.go:/events
I1109 00:29:24.275192  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.275208  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.276026  110118 store.go:1342] Monitoring limitranges count at <storage-prefix>//limitranges
I1109 00:29:24.276102  110118 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.276271  110118 reflector.go:188] Listing and watching *core.LimitRange from storage/cacher.go:/limitranges
I1109 00:29:24.276278  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.276337  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.276497  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.277834  110118 store.go:1342] Monitoring resourcequotas count at <storage-prefix>//resourcequotas
I1109 00:29:24.277882  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.278031  110118 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.278074  110118 reflector.go:188] Listing and watching *core.ResourceQuota from storage/cacher.go:/resourcequotas
I1109 00:29:24.278164  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.278181  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.279357  110118 store.go:1342] Monitoring secrets count at <storage-prefix>//secrets
I1109 00:29:24.279379  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.279470  110118 reflector.go:188] Listing and watching *core.Secret from storage/cacher.go:/secrets
I1109 00:29:24.279564  110118 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.279738  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.279760  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.280335  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.280558  110118 store.go:1342] Monitoring persistentvolumes count at <storage-prefix>//persistentvolumes
I1109 00:29:24.280744  110118 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.280855  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.280874  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.280877  110118 reflector.go:188] Listing and watching *core.PersistentVolume from storage/cacher.go:/persistentvolumes
I1109 00:29:24.281751  110118 store.go:1342] Monitoring persistentvolumeclaims count at <storage-prefix>//persistentvolumeclaims
I1109 00:29:24.281857  110118 reflector.go:188] Listing and watching *core.PersistentVolumeClaim from storage/cacher.go:/persistentvolumeclaims
I1109 00:29:24.281905  110118 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.282046  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.282063  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.283000  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.283130  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.284059  110118 store.go:1342] Monitoring configmaps count at <storage-prefix>//configmaps
I1109 00:29:24.284112  110118 reflector.go:188] Listing and watching *core.ConfigMap from storage/cacher.go:/configmaps
I1109 00:29:24.284222  110118 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.284339  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.284356  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.285016  110118 store.go:1342] Monitoring namespaces count at <storage-prefix>//namespaces
I1109 00:29:24.285218  110118 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.285291  110118 reflector.go:188] Listing and watching *core.Namespace from storage/cacher.go:/namespaces
I1109 00:29:24.285382  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.285405  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.285800  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.285955  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.287091  110118 store.go:1342] Monitoring endpoints count at <storage-prefix>//services/endpoints
I1109 00:29:24.287186  110118 reflector.go:188] Listing and watching *core.Endpoints from storage/cacher.go:/services/endpoints
I1109 00:29:24.287298  110118 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.287425  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.287445  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.288008  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.288916  110118 store.go:1342] Monitoring nodes count at <storage-prefix>//minions
I1109 00:29:24.289140  110118 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.289351  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.289382  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.289477  110118 reflector.go:188] Listing and watching *core.Node from storage/cacher.go:/minions
I1109 00:29:24.290665  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.291015  110118 store.go:1342] Monitoring pods count at <storage-prefix>//pods
I1109 00:29:24.291218  110118 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.291324  110118 reflector.go:188] Listing and watching *core.Pod from storage/cacher.go:/pods
I1109 00:29:24.291380  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.291403  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.293158  110118 store.go:1342] Monitoring serviceaccounts count at <storage-prefix>//serviceaccounts
I1109 00:29:24.293184  110118 reflector.go:188] Listing and watching *core.ServiceAccount from storage/cacher.go:/serviceaccounts
I1109 00:29:24.293224  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.293380  110118 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.293520  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.293540  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.294735  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.294772  110118 store.go:1342] Monitoring services count at <storage-prefix>//services/specs
I1109 00:29:24.294821  110118 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.294998  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.295017  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.295301  110118 reflector.go:188] Listing and watching *core.Service from storage/cacher.go:/services/specs
I1109 00:29:24.295986  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.296020  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.296348  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.297154  110118 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.297344  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.297378  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.298210  110118 store.go:1342] Monitoring replicationcontrollers count at <storage-prefix>//controllers
I1109 00:29:24.298269  110118 rest.go:115] the default service ipfamily for this cluster is: IPv4
I1109 00:29:24.298299  110118 reflector.go:188] Listing and watching *core.ReplicationController from storage/cacher.go:/controllers
I1109 00:29:24.298702  110118 storage_factory.go:285] storing bindings in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.298877  110118 storage_factory.go:285] storing componentstatuses in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.299638  110118 storage_factory.go:285] storing configmaps in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.300867  110118 storage_factory.go:285] storing endpoints in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.301655  110118 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.302939  110118 storage_factory.go:285] storing limitranges in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.303473  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.304705  110118 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.304948  110118 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.305190  110118 storage_factory.go:285] storing namespaces in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.305747  110118 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.306421  110118 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.306688  110118 storage_factory.go:285] storing nodes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.307482  110118 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.307825  110118 storage_factory.go:285] storing persistentvolumeclaims in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.308366  110118 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.308730  110118 storage_factory.go:285] storing persistentvolumes in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.309463  110118 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.309724  110118 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.310026  110118 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.310226  110118 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.310524  110118 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.310782  110118 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.311335  110118 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.312100  110118 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.312508  110118 storage_factory.go:285] storing pods in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.313354  110118 storage_factory.go:285] storing podtemplates in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.314294  110118 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.314685  110118 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.315000  110118 storage_factory.go:285] storing replicationcontrollers in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.315935  110118 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.316415  110118 storage_factory.go:285] storing resourcequotas in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.317289  110118 storage_factory.go:285] storing secrets in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.318059  110118 storage_factory.go:285] storing serviceaccounts in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.318818  110118 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.319784  110118 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.320116  110118 storage_factory.go:285] storing services in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.320360  110118 master.go:493] Skipping disabled API group "auditregistration.k8s.io".
I1109 00:29:24.320463  110118 master.go:504] Enabling API group "authentication.k8s.io".
I1109 00:29:24.320709  110118 master.go:504] Enabling API group "authorization.k8s.io".
I1109 00:29:24.320932  110118 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.321194  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.321323  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.322356  110118 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1109 00:29:24.322529  110118 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.322616  110118 reflector.go:188] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1109 00:29:24.322680  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.322714  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.323763  110118 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1109 00:29:24.323898  110118 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.324044  110118 reflector.go:188] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1109 00:29:24.324271  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.324435  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.324534  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.324924  110118 store.go:1342] Monitoring horizontalpodautoscalers.autoscaling count at <storage-prefix>//horizontalpodautoscalers
I1109 00:29:24.324939  110118 master.go:504] Enabling API group "autoscaling".
I1109 00:29:24.325048  110118 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.325073  110118 reflector.go:188] Listing and watching *autoscaling.HorizontalPodAutoscaler from storage/cacher.go:/horizontalpodautoscalers
I1109 00:29:24.325147  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.325160  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.326320  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.326438  110118 store.go:1342] Monitoring jobs.batch count at <storage-prefix>//jobs
I1109 00:29:24.326656  110118 reflector.go:188] Listing and watching *batch.Job from storage/cacher.go:/jobs
I1109 00:29:24.326871  110118 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.327045  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.327162  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.327718  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.327870  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.327991  110118 store.go:1342] Monitoring cronjobs.batch count at <storage-prefix>//cronjobs
I1109 00:29:24.328008  110118 master.go:504] Enabling API group "batch".
I1109 00:29:24.328094  110118 reflector.go:188] Listing and watching *batch.CronJob from storage/cacher.go:/cronjobs
I1109 00:29:24.328135  110118 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.328235  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.328282  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.329486  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.329570  110118 store.go:1342] Monitoring certificatesigningrequests.certificates.k8s.io count at <storage-prefix>//certificatesigningrequests
I1109 00:29:24.329592  110118 master.go:504] Enabling API group "certificates.k8s.io".
I1109 00:29:24.329767  110118 reflector.go:188] Listing and watching *certificates.CertificateSigningRequest from storage/cacher.go:/certificatesigningrequests
I1109 00:29:24.329774  110118 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.330071  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.330090  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.330724  110118 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I1109 00:29:24.330949  110118 reflector.go:188] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I1109 00:29:24.331007  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.331028  110118 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.331146  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.331163  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.331881  110118 store.go:1342] Monitoring leases.coordination.k8s.io count at <storage-prefix>//leases
I1109 00:29:24.331896  110118 master.go:504] Enabling API group "coordination.k8s.io".
I1109 00:29:24.332046  110118 storage_factory.go:285] storing endpointslices.discovery.k8s.io in discovery.k8s.io/v1beta1, reading as discovery.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.332179  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.332202  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.332424  110118 reflector.go:188] Listing and watching *coordination.Lease from storage/cacher.go:/leases
I1109 00:29:24.332664  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.333292  110118 store.go:1342] Monitoring endpointslices.discovery.k8s.io count at <storage-prefix>//endpointslices
I1109 00:29:24.333398  110118 master.go:504] Enabling API group "discovery.k8s.io".
I1109 00:29:24.333326  110118 reflector.go:188] Listing and watching *discovery.EndpointSlice from storage/cacher.go:/endpointslices
I1109 00:29:24.333748  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.333953  110118 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.334231  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.334285  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.334800  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.335217  110118 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I1109 00:29:24.335275  110118 master.go:504] Enabling API group "extensions".
I1109 00:29:24.335297  110118 reflector.go:188] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I1109 00:29:24.335471  110118 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.335655  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.335682  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.336201  110118 store.go:1342] Monitoring networkpolicies.networking.k8s.io count at <storage-prefix>//networkpolicies
I1109 00:29:24.336273  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.336361  110118 reflector.go:188] Listing and watching *networking.NetworkPolicy from storage/cacher.go:/networkpolicies
I1109 00:29:24.336393  110118 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.336755  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.336782  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.337910  110118 store.go:1342] Monitoring ingresses.networking.k8s.io count at <storage-prefix>//ingress
I1109 00:29:24.337931  110118 master.go:504] Enabling API group "networking.k8s.io".
I1109 00:29:24.338001  110118 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.338161  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.338180  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.338214  110118 reflector.go:188] Listing and watching *networking.Ingress from storage/cacher.go:/ingress
I1109 00:29:24.338476  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.339157  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.339588  110118 store.go:1342] Monitoring runtimeclasses.node.k8s.io count at <storage-prefix>//runtimeclasses
I1109 00:29:24.339613  110118 master.go:504] Enabling API group "node.k8s.io".
I1109 00:29:24.339646  110118 reflector.go:188] Listing and watching *node.RuntimeClass from storage/cacher.go:/runtimeclasses
I1109 00:29:24.339858  110118 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.340104  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.340134  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.340455  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.341026  110118 store.go:1342] Monitoring poddisruptionbudgets.policy count at <storage-prefix>//poddisruptionbudgets
I1109 00:29:24.341143  110118 reflector.go:188] Listing and watching *policy.PodDisruptionBudget from storage/cacher.go:/poddisruptionbudgets
I1109 00:29:24.341187  110118 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.341357  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.341398  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.342196  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.342586  110118 store.go:1342] Monitoring podsecuritypolicies.policy count at <storage-prefix>//podsecuritypolicy
I1109 00:29:24.342612  110118 master.go:504] Enabling API group "policy".
I1109 00:29:24.342657  110118 reflector.go:188] Listing and watching *policy.PodSecurityPolicy from storage/cacher.go:/podsecuritypolicy
I1109 00:29:24.342665  110118 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.342808  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.342870  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.343927  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.344235  110118 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I1109 00:29:24.344315  110118 reflector.go:188] Listing and watching *rbac.Role from storage/cacher.go:/roles
I1109 00:29:24.344734  110118 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.345136  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.345279  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.345631  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.346598  110118 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I1109 00:29:24.346676  110118 reflector.go:188] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I1109 00:29:24.346944  110118 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.347177  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.347305  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.347834  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.348133  110118 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I1109 00:29:24.348227  110118 reflector.go:188] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I1109 00:29:24.348408  110118 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.348696  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.348733  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.349624  110118 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I1109 00:29:24.349688  110118 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.349724  110118 reflector.go:188] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I1109 00:29:24.349791  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.349804  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.350700  110118 store.go:1342] Monitoring roles.rbac.authorization.k8s.io count at <storage-prefix>//roles
I1109 00:29:24.350859  110118 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.350882  110118 reflector.go:188] Listing and watching *rbac.Role from storage/cacher.go:/roles
I1109 00:29:24.350972  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.350984  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.351877  110118 store.go:1342] Monitoring rolebindings.rbac.authorization.k8s.io count at <storage-prefix>//rolebindings
I1109 00:29:24.352012  110118 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.352039  110118 reflector.go:188] Listing and watching *rbac.RoleBinding from storage/cacher.go:/rolebindings
I1109 00:29:24.352140  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.352157  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.353472  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.353510  110118 store.go:1342] Monitoring clusterroles.rbac.authorization.k8s.io count at <storage-prefix>//clusterroles
I1109 00:29:24.353573  110118 reflector.go:188] Listing and watching *rbac.ClusterRole from storage/cacher.go:/clusterroles
I1109 00:29:24.353582  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.353686  110118 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.353513  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.353831  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.353854  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.354035  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.354780  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.355367  110118 store.go:1342] Monitoring clusterrolebindings.rbac.authorization.k8s.io count at <storage-prefix>//clusterrolebindings
I1109 00:29:24.355399  110118 master.go:504] Enabling API group "rbac.authorization.k8s.io".
I1109 00:29:24.355507  110118 reflector.go:188] Listing and watching *rbac.ClusterRoleBinding from storage/cacher.go:/clusterrolebindings
I1109 00:29:24.356527  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.357775  110118 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.357950  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.357974  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.358658  110118 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I1109 00:29:24.358692  110118 reflector.go:188] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I1109 00:29:24.358815  110118 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.358947  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.358970  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.359471  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.359734  110118 store.go:1342] Monitoring priorityclasses.scheduling.k8s.io count at <storage-prefix>//priorityclasses
I1109 00:29:24.359763  110118 master.go:504] Enabling API group "scheduling.k8s.io".
I1109 00:29:24.359808  110118 reflector.go:188] Listing and watching *scheduling.PriorityClass from storage/cacher.go:/priorityclasses
I1109 00:29:24.359883  110118 master.go:493] Skipping disabled API group "settings.k8s.io".
I1109 00:29:24.360285  110118 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.360424  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.360445  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.360836  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.361705  110118 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I1109 00:29:24.361949  110118 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.362018  110118 reflector.go:188] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I1109 00:29:24.362120  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.362142  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.363057  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.363556  110118 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I1109 00:29:24.363679  110118 reflector.go:188] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I1109 00:29:24.364157  110118 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.364469  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.364657  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.365402  110118 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I1109 00:29:24.365481  110118 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.365644  110118 reflector.go:188] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I1109 00:29:24.365684  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.365705  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.366657  110118 store.go:1342] Monitoring csidrivers.storage.k8s.io count at <storage-prefix>//csidrivers
I1109 00:29:24.366711  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.366735  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.366854  110118 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.366996  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.367017  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.367982  110118 reflector.go:188] Listing and watching *storage.CSIDriver from storage/cacher.go:/csidrivers
I1109 00:29:24.368337  110118 store.go:1342] Monitoring storageclasses.storage.k8s.io count at <storage-prefix>//storageclasses
I1109 00:29:24.368416  110118 reflector.go:188] Listing and watching *storage.StorageClass from storage/cacher.go:/storageclasses
I1109 00:29:24.368778  110118 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.368946  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.368977  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.369411  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.369749  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.370069  110118 store.go:1342] Monitoring volumeattachments.storage.k8s.io count at <storage-prefix>//volumeattachments
I1109 00:29:24.370110  110118 reflector.go:188] Listing and watching *storage.VolumeAttachment from storage/cacher.go:/volumeattachments
I1109 00:29:24.370160  110118 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.370329  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.370366  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.370911  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.372196  110118 store.go:1342] Monitoring csinodes.storage.k8s.io count at <storage-prefix>//csinodes
I1109 00:29:24.372219  110118 master.go:504] Enabling API group "storage.k8s.io".
I1109 00:29:24.372275  110118 reflector.go:188] Listing and watching *storage.CSINode from storage/cacher.go:/csinodes
I1109 00:29:24.372472  110118 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.372777  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.372806  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.373704  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.374134  110118 store.go:1342] Monitoring deployments.apps count at <storage-prefix>//deployments
I1109 00:29:24.374221  110118 reflector.go:188] Listing and watching *apps.Deployment from storage/cacher.go:/deployments
I1109 00:29:24.374327  110118 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.374444  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.374462  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.375371  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.375974  110118 store.go:1342] Monitoring statefulsets.apps count at <storage-prefix>//statefulsets
I1109 00:29:24.376166  110118 reflector.go:188] Listing and watching *apps.StatefulSet from storage/cacher.go:/statefulsets
I1109 00:29:24.377040  110118 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.377130  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.377261  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.377297  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.378812  110118 store.go:1342] Monitoring daemonsets.apps count at <storage-prefix>//daemonsets
I1109 00:29:24.378904  110118 reflector.go:188] Listing and watching *apps.DaemonSet from storage/cacher.go:/daemonsets
I1109 00:29:24.379023  110118 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.379933  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.380447  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.380489  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.381686  110118 store.go:1342] Monitoring replicasets.apps count at <storage-prefix>//replicasets
I1109 00:29:24.381722  110118 reflector.go:188] Listing and watching *apps.ReplicaSet from storage/cacher.go:/replicasets
I1109 00:29:24.381940  110118 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.382535  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.382765  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.382794  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.384020  110118 store.go:1342] Monitoring controllerrevisions.apps count at <storage-prefix>//controllerrevisions
I1109 00:29:24.384049  110118 master.go:504] Enabling API group "apps".
I1109 00:29:24.384085  110118 reflector.go:188] Listing and watching *apps.ControllerRevision from storage/cacher.go:/controllerrevisions
I1109 00:29:24.384104  110118 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.384205  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.384220  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.385177  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.385510  110118 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I1109 00:29:24.385560  110118 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.385614  110118 reflector.go:188] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I1109 00:29:24.385668  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.385680  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.387084  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.387259  110118 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I1109 00:29:24.387312  110118 reflector.go:188] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I1109 00:29:24.387312  110118 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.387424  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.387439  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.388141  110118 store.go:1342] Monitoring validatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//validatingwebhookconfigurations
I1109 00:29:24.388212  110118 reflector.go:188] Listing and watching *admissionregistration.ValidatingWebhookConfiguration from storage/cacher.go:/validatingwebhookconfigurations
I1109 00:29:24.388339  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.388213  110118 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.388573  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.388597  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.389499  110118 store.go:1342] Monitoring mutatingwebhookconfigurations.admissionregistration.k8s.io count at <storage-prefix>//mutatingwebhookconfigurations
I1109 00:29:24.389517  110118 master.go:504] Enabling API group "admissionregistration.k8s.io".
I1109 00:29:24.389556  110118 reflector.go:188] Listing and watching *admissionregistration.MutatingWebhookConfiguration from storage/cacher.go:/mutatingwebhookconfigurations
I1109 00:29:24.389561  110118 storage_factory.go:285] storing events in v1, reading as __internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.389611  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.389861  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:24.389890  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:24.390659  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.391074  110118 store.go:1342] Monitoring events count at <storage-prefix>//events
I1109 00:29:24.391102  110118 master.go:504] Enabling API group "events.k8s.io".
I1109 00:29:24.391157  110118 reflector.go:188] Listing and watching *core.Event from storage/cacher.go:/events
I1109 00:29:24.391498  110118 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.391749  110118 storage_factory.go:285] storing tokenreviews.authentication.k8s.io in authentication.k8s.io/v1, reading as authentication.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.392081  110118 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.392233  110118 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.392416  110118 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.392428  110118 watch_cache.go:409] Replace watchCache (rev: 56023) 
I1109 00:29:24.392545  110118 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.392773  110118 storage_factory.go:285] storing localsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.392938  110118 storage_factory.go:285] storing selfsubjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.393175  110118 storage_factory.go:285] storing selfsubjectrulesreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.393436  110118 storage_factory.go:285] storing subjectaccessreviews.authorization.k8s.io in authorization.k8s.io/v1, reading as authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.394428  110118 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.394824  110118 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.395584  110118 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.396013  110118 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.396849  110118 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.397194  110118 storage_factory.go:285] storing horizontalpodautoscalers.autoscaling in autoscaling/v1, reading as autoscaling/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.397989  110118 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.398425  110118 storage_factory.go:285] storing jobs.batch in batch/v1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.399118  110118 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.399456  110118 storage_factory.go:285] storing cronjobs.batch in batch/v1beta1, reading as batch/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1109 00:29:24.399614  110118 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
I1109 00:29:24.400234  110118 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.401387  110118 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.401699  110118 storage_factory.go:285] storing certificatesigningrequests.certificates.k8s.io in certificates.k8s.io/v1beta1, reading as certificates.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.402453  110118 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.403216  110118 storage_factory.go:285] storing leases.coordination.k8s.io in coordination.k8s.io/v1beta1, reading as coordination.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.403910  110118 storage_factory.go:285] storing endpointslices.discovery.k8s.io in discovery.k8s.io/v1beta1, reading as discovery.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1109 00:29:24.404050  110118 genericapiserver.go:404] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
I1109 00:29:24.404773  110118 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.405261  110118 storage_factory.go:285] storing ingresses.extensions in extensions/v1beta1, reading as extensions/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.405969  110118 storage_factory.go:285] storing networkpolicies.networking.k8s.io in networking.k8s.io/v1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.406633  110118 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.406923  110118 storage_factory.go:285] storing ingresses.networking.k8s.io in networking.k8s.io/v1beta1, reading as networking.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.407614  110118 storage_factory.go:285] storing runtimeclasses.node.k8s.io in node.k8s.io/v1beta1, reading as node.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1109 00:29:24.407701  110118 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
I1109 00:29:24.408439  110118 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.408747  110118 storage_factory.go:285] storing poddisruptionbudgets.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.409399  110118 storage_factory.go:285] storing podsecuritypolicies.policy in policy/v1beta1, reading as policy/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.410184  110118 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.410734  110118 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.411395  110118 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.412234  110118 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.412934  110118 storage_factory.go:285] storing clusterrolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.413442  110118 storage_factory.go:285] storing clusterroles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.414127  110118 storage_factory.go:285] storing rolebindings.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.414811  110118 storage_factory.go:285] storing roles.rbac.authorization.k8s.io in rbac.authorization.k8s.io/v1, reading as rbac.authorization.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1109 00:29:24.414902  110118 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
I1109 00:29:24.415490  110118 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.416840  110118 storage_factory.go:285] storing priorityclasses.scheduling.k8s.io in scheduling.k8s.io/v1, reading as scheduling.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1109 00:29:24.417025  110118 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
I1109 00:29:24.417702  110118 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.418348  110118 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.419026  110118 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.419464  110118 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.420295  110118 storage_factory.go:285] storing csidrivers.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.420877  110118 storage_factory.go:285] storing csinodes.storage.k8s.io in storage.k8s.io/v1beta1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.421483  110118 storage_factory.go:285] storing storageclasses.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.422074  110118 storage_factory.go:285] storing volumeattachments.storage.k8s.io in storage.k8s.io/v1, reading as storage.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1109 00:29:24.422296  110118 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
I1109 00:29:24.423074  110118 storage_factory.go:285] storing controllerrevisions.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.423718  110118 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.424030  110118 storage_factory.go:285] storing daemonsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.424713  110118 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.425070  110118 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.425409  110118 storage_factory.go:285] storing deployments.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.426224  110118 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.426660  110118 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.426939  110118 storage_factory.go:285] storing replicasets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.427636  110118 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.427976  110118 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.428321  110118 storage_factory.go:285] storing statefulsets.apps in apps/v1, reading as apps/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
W1109 00:29:24.428471  110118 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W1109 00:29:24.428532  110118 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I1109 00:29:24.429144  110118 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.429862  110118 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.430664  110118 storage_factory.go:285] storing mutatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.431463  110118 storage_factory.go:285] storing validatingwebhookconfigurations.admissionregistration.k8s.io in admissionregistration.k8s.io/v1beta1, reading as admissionregistration.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.432346  110118 storage_factory.go:285] storing events.events.k8s.io in events.k8s.io/v1beta1, reading as events.k8s.io/__internal from storagebackend.Config{Type:"", Prefix:"7f56772b-fbf6-4403-ae70-b2541a215198", Transport:storagebackend.TransportConfig{ServerList:[]string{"http://127.0.0.1:2379"}, KeyFile:"", CertFile:"", TrustedCAFile:"", EgressLookup:(egressselector.Lookup)(nil)}, Paging:true, Codec:runtime.Codec(nil), EncodeVersioner:runtime.GroupVersioner(nil), Transformer:value.Transformer(nil), CompactionInterval:300000000000, CountMetricPollPeriod:60000000000}
I1109 00:29:24.436052  110118 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:29:24.436094  110118 healthz.go:177] healthz check poststarthook/bootstrap-controller failed: not finished
I1109 00:29:24.436106  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:24.436117  110118 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:29:24.436125  110118 healthz.go:177] healthz check poststarthook/start-cluster-authentication-info-controller failed: not finished
I1109 00:29:24.436134  110118 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[-]poststarthook/start-cluster-authentication-info-controller failed: reason withheld
healthz check failed
I1109 00:29:24.436171  110118 httplog.go:90] GET /healthz: (291.314µs) 0 [Go-http-client/1.1 127.0.0.1:41616]
I1109 00:29:24.437841  110118 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.951359ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41618]
W1109 00:29:24.438880  110118 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1109 00:29:24.439094  110118 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I1109 00:29:24.439179  110118 shared_informer.go:197] Waiting for caches to sync for cluster_authentication_trust_controller
I1109 00:29:24.439447  110118 reflector.go:153] Starting reflector *v1.ConfigMap (12h0m0s) from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I1109 00:29:24.439480  110118 reflector.go:188] Listing and watching *v1.ConfigMap from k8s.io/kubernetes/pkg/master/controller/clusterauthenticationtrust/cluster_authentication_trust_controller.go:444
I1109 00:29:24.440441  110118 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps?limit=500&resourceVersion=0: (691.864µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41618]
I1109 00:29:24.441158  110118 httplog.go:90] GET /api/v1/services: (1.163908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41616]
I1109 00:29:24.441590  110118 get.go:251] Starting watch for /api/v1/namespaces/kube-system/configmaps, rv=56023 labels= fields= timeout=8m11s
I1109 00:29:24.445823  110118 httplog.go:90] GET /api/v1/services: (1.104895ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41616]
I1109 00:29:24.449177  110118 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:29:24.449213  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:24.449226  110118 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:29:24.449235  110118 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:24.449342  110118 httplog.go:90] GET /healthz: (311.184µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41616]
I1109 00:29:24.451078  110118 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.666341ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41620]
I1109 00:29:24.451095  110118 httplog.go:90] GET /api/v1/services: (1.482025ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41616]
I1109 00:29:24.452321  110118 httplog.go:90] GET /api/v1/services: (2.75218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:24.455342  110118 httplog.go:90] POST /api/v1/namespaces: (3.762972ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41620]
I1109 00:29:24.456931  110118 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.220723ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:24.458891  110118 httplog.go:90] POST /api/v1/namespaces: (1.535208ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:24.460433  110118 httplog.go:90] GET /api/v1/namespaces/kube-node-lease: (1.00586ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:24.462727  110118 httplog.go:90] POST /api/v1/namespaces: (1.899436ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:24.537065  110118 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:29:24.537104  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:24.537116  110118 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:29:24.537129  110118 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:24.537158  110118 httplog.go:90] GET /healthz: (240.982µs) 0 [Go-http-client/1.1 127.0.0.1:41622]
I1109 00:29:24.539434  110118 shared_informer.go:227] caches populated
I1109 00:29:24.539541  110118 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller 
I1109 00:29:24.550052  110118 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:29:24.550262  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:24.550335  110118 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:29:24.550377  110118 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:24.550518  110118 httplog.go:90] GET /healthz: (597.34µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:24.637016  110118 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:29:24.637057  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:24.637071  110118 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:29:24.637081  110118 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:24.637114  110118 httplog.go:90] GET /healthz: (258.624µs) 0 [Go-http-client/1.1 127.0.0.1:41622]
I1109 00:29:24.650192  110118 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:29:24.650253  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:24.650266  110118 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:29:24.650275  110118 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:24.650332  110118 httplog.go:90] GET /healthz: (322.393µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:24.737501  110118 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:29:24.737565  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:24.737600  110118 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:29:24.737634  110118 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:24.737724  110118 httplog.go:90] GET /healthz: (546.744µs) 0 [Go-http-client/1.1 127.0.0.1:41622]
I1109 00:29:24.750380  110118 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:29:24.750481  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:24.750529  110118 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:29:24.750567  110118 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:24.750653  110118 httplog.go:90] GET /healthz: (569.467µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:24.837078  110118 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:29:24.837121  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:24.837135  110118 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:29:24.837145  110118 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:24.837188  110118 httplog.go:90] GET /healthz: (359.409µs) 0 [Go-http-client/1.1 127.0.0.1:41622]
I1109 00:29:24.850236  110118 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:29:24.850314  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:24.850326  110118 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:29:24.850334  110118 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:24.850366  110118 httplog.go:90] GET /healthz: (291.452µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:24.937052  110118 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:29:24.937090  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:24.937115  110118 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:29:24.937124  110118 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:24.937153  110118 httplog.go:90] GET /healthz: (257.765µs) 0 [Go-http-client/1.1 127.0.0.1:41622]
I1109 00:29:24.950072  110118 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:29:24.950108  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:24.950121  110118 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:29:24.950130  110118 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:24.950160  110118 httplog.go:90] GET /healthz: (236.331µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.037080  110118 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:29:25.037123  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:25.037138  110118 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:29:25.037149  110118 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:25.037188  110118 httplog.go:90] GET /healthz: (322.965µs) 0 [Go-http-client/1.1 127.0.0.1:41622]
I1109 00:29:25.050226  110118 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:29:25.050301  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:25.050315  110118 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:29:25.050325  110118 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:25.050367  110118 httplog.go:90] GET /healthz: (327.702µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
E1109 00:29:25.066018  110118 factory.go:682] Error getting pod permit-plugins9bb06b34-0385-4607-88a7-8da8fb12ea45/test-pod for retry: Get http://127.0.0.1:46651/api/v1/namespaces/permit-plugins9bb06b34-0385-4607-88a7-8da8fb12ea45/pods/test-pod: dial tcp 127.0.0.1:46651: connect: connection refused; retrying...
I1109 00:29:25.136993  110118 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:29:25.137067  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:25.137080  110118 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:29:25.137103  110118 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:25.137152  110118 httplog.go:90] GET /healthz: (323.679µs) 0 [Go-http-client/1.1 127.0.0.1:41622]
I1109 00:29:25.150346  110118 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:29:25.150395  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:25.150414  110118 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:29:25.150425  110118 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:25.150463  110118 httplog.go:90] GET /healthz: (330.479µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.237062  110118 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:29:25.237105  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:25.237132  110118 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:29:25.237151  110118 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:25.237193  110118 httplog.go:90] GET /healthz: (304.701µs) 0 [Go-http-client/1.1 127.0.0.1:41622]
I1109 00:29:25.250355  110118 healthz.go:177] healthz check etcd failed: etcd client connection not yet established
I1109 00:29:25.250399  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:25.250413  110118 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:29:25.250422  110118 healthz.go:191] [+]ping ok
[+]log ok
[-]etcd failed: reason withheld
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:25.250460  110118 httplog.go:90] GET /healthz: (369.93µs) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.266808  110118 client.go:361] parsed scheme: "endpoint"
I1109 00:29:25.266932  110118 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0  <nil>}]
I1109 00:29:25.338449  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:25.338489  110118 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:29:25.338502  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:25.338587  110118 httplog.go:90] GET /healthz: (1.661587ms) 0 [Go-http-client/1.1 127.0.0.1:41622]
I1109 00:29:25.351358  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:25.351403  110118 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:29:25.351415  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:25.351466  110118 httplog.go:90] GET /healthz: (1.517027ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.437601  110118 httplog.go:90] GET /apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical: (1.59191ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.437823  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.767716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41624]
I1109 00:29:25.439459  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:25.439502  110118 healthz.go:177] healthz check poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
I1109 00:29:25.439514  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:25.439563  110118 httplog.go:90] GET /healthz: (1.898464ms) 0 [Go-http-client/1.1 127.0.0.1:41660]
I1109 00:29:25.439580  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (1.377452ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41624]
I1109 00:29:25.440270  110118 httplog.go:90] POST /apis/scheduling.k8s.io/v1/priorityclasses: (2.251483ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.440724  110118 storage_scheduling.go:133] created PriorityClass system-node-critical with value 2000001000
I1109 00:29:25.441177  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.114235ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41624]
I1109 00:29:25.441864  110118 httplog.go:90] GET /apis/scheduling.k8s.io/v1/priorityclasses/system-cluster-critical: (920.035µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.442897  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (894.846µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41624]
I1109 00:29:25.444709  110118 httplog.go:90] POST /apis/scheduling.k8s.io/v1/priorityclasses: (2.113419ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.445231  110118 storage_scheduling.go:133] created PriorityClass system-cluster-critical with value 2000000000
I1109 00:29:25.445279  110118 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
I1109 00:29:25.445338  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.964827ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41624]
I1109 00:29:25.446892  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.018135ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.448373  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.007797ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.449766  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (798.836µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.451286  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:25.451526  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:25.451677  110118 httplog.go:90] GET /healthz: (1.823655ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:25.451386  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.134957ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.453039  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/cluster-admin: (909.783µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.455801  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.18679ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.456006  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/cluster-admin
I1109 00:29:25.457319  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:discovery: (1.090781ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.460065  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.232452ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.460385  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:discovery
I1109 00:29:25.461861  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:basic-user: (1.049226ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.464137  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.78483ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.464421  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:basic-user
I1109 00:29:25.465966  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:public-info-viewer: (1.307598ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.468610  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.039028ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.468903  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:public-info-viewer
I1109 00:29:25.470790  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/admin: (1.59647ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.474104  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.310642ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.474592  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/admin
I1109 00:29:25.479514  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/edit: (4.670703ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.483158  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.742962ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.484034  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/edit
I1109 00:29:25.485773  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/view: (1.319504ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.488275  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.938903ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.488604  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/view
I1109 00:29:25.490625  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-admin: (1.771714ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.493590  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.18807ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.493902  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin
I1109 00:29:25.495646  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-edit: (1.402914ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.499418  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.923702ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.500007  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit
I1109 00:29:25.501958  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:aggregate-to-view: (1.491095ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.506302  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.654491ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.506586  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view
I1109 00:29:25.508028  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:heapster: (1.107399ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.510503  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.876344ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.511144  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:heapster
I1109 00:29:25.513815  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node: (1.176288ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.516504  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.110084ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.516847  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node
I1109 00:29:25.518498  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-problem-detector: (1.267545ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.521328  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.327481ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.521600  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector
I1109 00:29:25.523068  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kubelet-api-admin: (1.225988ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.527826  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (4.244613ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.528116  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kubelet-api-admin
I1109 00:29:25.530345  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-bootstrapper: (1.952495ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.533206  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.14896ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.533457  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper
I1109 00:29:25.535871  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:auth-delegator: (1.271927ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.537684  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:25.537715  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:25.537756  110118 httplog.go:90] GET /healthz: (952.782µs) 0 [Go-http-client/1.1 127.0.0.1:41660]
I1109 00:29:25.538371  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.802328ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.538599  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator
I1109 00:29:25.540214  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-aggregator: (1.414424ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.542837  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.978294ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.543360  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator
I1109 00:29:25.544851  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-controller-manager: (1.222275ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.547816  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.116966ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.548343  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager
I1109 00:29:25.550093  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-dns: (1.367324ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.551359  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:25.551425  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:25.551462  110118 httplog.go:90] GET /healthz: (1.615061ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:25.553031  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.236325ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.553565  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-dns
I1109 00:29:25.556154  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:persistent-volume-provisioner: (2.321606ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.559305  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.356135ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.559677  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner
I1109 00:29:25.561460  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:nodeclient: (1.392579ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.564586  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.492509ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.564925  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient
I1109 00:29:25.566361  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient: (1.110699ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.569035  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.185713ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.569298  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
I1109 00:29:25.570575  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:volume-scheduler: (1.028919ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.573308  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.214956ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.573727  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:volume-scheduler
I1109 00:29:25.575412  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:node-proxier: (1.399347ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.578578  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.678518ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.578886  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:node-proxier
I1109 00:29:25.584205  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:kube-scheduler: (4.978527ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.587908  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.014733ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.588519  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler
I1109 00:29:25.590590  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:attachdetach-controller: (1.783125ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.593479  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.357759ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.593771  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1109 00:29:25.595833  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:clusterrole-aggregation-controller: (1.818059ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.598555  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.223414ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.598905  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1109 00:29:25.600715  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:cronjob-controller: (1.413282ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.603831  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.524244ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.604185  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1109 00:29:25.605897  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:daemon-set-controller: (1.394924ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.608681  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.23583ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.608945  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1109 00:29:25.610943  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:deployment-controller: (1.720125ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.613941  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.285666ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.614422  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller
I1109 00:29:25.616419  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:disruption-controller: (1.438214ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.619187  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.177657ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.619547  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller
I1109 00:29:25.620942  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpoint-controller: (1.080696ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.623491  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.030473ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.623812  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1109 00:29:25.627447  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:endpointslice-controller: (3.396474ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.630605  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.561404ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.630855  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:endpointslice-controller
I1109 00:29:25.632231  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:expand-controller: (1.131624ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.635150  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.394663ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.635553  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:expand-controller
I1109 00:29:25.636803  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:generic-garbage-collector: (997.625µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.638356  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:25.638491  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:25.638674  110118 httplog.go:90] GET /healthz: (1.874402ms) 0 [Go-http-client/1.1 127.0.0.1:41660]
I1109 00:29:25.639809  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.424474ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.640145  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1109 00:29:25.641346  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:horizontal-pod-autoscaler: (918.092µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.644474  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.404381ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.644856  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1109 00:29:25.646778  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:job-controller: (1.566075ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.649194  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.936886ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.649502  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller
I1109 00:29:25.650980  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:namespace-controller: (1.130315ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.651056  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:25.651081  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:25.651122  110118 httplog.go:90] GET /healthz: (1.136093ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:25.654905  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.408683ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.655195  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller
I1109 00:29:25.656658  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:node-controller: (1.186269ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.659216  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.032566ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.659504  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller
I1109 00:29:25.661054  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:persistent-volume-binder: (1.28072ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.666646  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (5.027906ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.667046  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1109 00:29:25.668781  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pod-garbage-collector: (1.405928ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.672042  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.522395ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.672686  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1109 00:29:25.674576  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replicaset-controller: (1.48928ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.677675  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.456931ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.678000  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1109 00:29:25.680073  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:replication-controller: (1.78344ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.683141  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.400771ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.683421  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller
I1109 00:29:25.687742  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:resourcequota-controller: (4.063438ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.691005  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.600087ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.691353  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1109 00:29:25.692799  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:route-controller: (1.193048ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.695361  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.079756ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.695709  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:route-controller
I1109 00:29:25.697089  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-account-controller: (1.13264ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.700161  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.392488ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.700432  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller
I1109 00:29:25.702134  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:service-controller: (1.45494ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.705966  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (3.414184ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.706288  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller
I1109 00:29:25.708551  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:statefulset-controller: (1.988346ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.711884  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.901445ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.712112  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1109 00:29:25.713636  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:ttl-controller: (1.285521ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.716556  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.242188ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.716862  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller
I1109 00:29:25.718071  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:certificate-controller: (1.03253ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.721393  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.697199ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.721759  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller
I1109 00:29:25.723157  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pvc-protection-controller: (1.061451ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.726103  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (2.312842ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.726362  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1109 00:29:25.727685  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterroles/system:controller:pv-protection-controller: (1.109875ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.729979  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterroles: (1.95777ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.730173  110118 storage_rbac.go:219] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1109 00:29:25.731344  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin: (931.453µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.738549  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.479671ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:25.738856  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin
I1109 00:29:25.739081  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:25.739111  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:25.739148  110118 httplog.go:90] GET /healthz: (2.433003ms) 0 [Go-http-client/1.1 127.0.0.1:41660]
I1109 00:29:25.752100  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:25.752152  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:25.752228  110118 httplog.go:90] GET /healthz: (2.13325ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:25.757876  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:discovery: (1.559433ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:25.779546  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.108634ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:25.779932  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery
I1109 00:29:25.798146  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:basic-user: (1.732345ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:25.819525  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.1861ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:25.819889  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user
I1109 00:29:25.838124  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:25.838166  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:public-info-viewer: (1.858469ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:25.838172  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:25.838221  110118 httplog.go:90] GET /healthz: (1.413043ms) 0 [Go-http-client/1.1 127.0.0.1:41622]
I1109 00:29:25.851976  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:25.852200  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:25.852510  110118 httplog.go:90] GET /healthz: (2.411894ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:25.859009  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.720707ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:25.859464  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:public-info-viewer
I1109 00:29:25.878152  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node-proxier: (1.841617ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:25.899408  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.148775ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:25.900059  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier
I1109 00:29:25.918287  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-controller-manager: (1.973598ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:25.938278  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:25.938313  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:25.938350  110118 httplog.go:90] GET /healthz: (1.495957ms) 0 [Go-http-client/1.1 127.0.0.1:41622]
I1109 00:29:25.939900  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.672755ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:25.940576  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager
I1109 00:29:25.951464  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:25.951508  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:25.951555  110118 httplog.go:90] GET /healthz: (1.630321ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:25.957666  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-dns: (1.55683ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:25.979348  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.977128ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:25.979612  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns
I1109 00:29:25.998699  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:kube-scheduler: (2.554315ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.018486  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.322565ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.018869  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler
I1109 00:29:26.037707  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:volume-scheduler: (1.568178ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.039155  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:26.039197  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:26.039285  110118 httplog.go:90] GET /healthz: (1.985062ms) 0 [Go-http-client/1.1 127.0.0.1:41622]
I1109 00:29:26.050931  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:26.051174  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:26.051422  110118 httplog.go:90] GET /healthz: (1.498557ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:26.059355  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.136956ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:26.059943  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:volume-scheduler
I1109 00:29:26.078055  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:node: (1.721878ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:26.098931  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.738006ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:26.099262  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:node
I1109 00:29:26.118653  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:attachdetach-controller: (2.022799ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:26.138747  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.442039ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:26.139343  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller
I1109 00:29:26.141179  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:26.141213  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:26.141285  110118 httplog.go:90] GET /healthz: (4.406012ms) 0 [Go-http-client/1.1 127.0.0.1:41660]
I1109 00:29:26.151616  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:26.151745  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:26.151796  110118 httplog.go:90] GET /healthz: (1.796339ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.158002  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:clusterrole-aggregation-controller: (1.812911ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.179851  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.431964ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.180429  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller
I1109 00:29:26.198085  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:cronjob-controller: (1.906334ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.218855  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.4743ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.219385  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller
I1109 00:29:26.237745  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:daemon-set-controller: (1.617857ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.239126  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:26.239184  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:26.239223  110118 httplog.go:90] GET /healthz: (1.683116ms) 0 [Go-http-client/1.1 127.0.0.1:41622]
I1109 00:29:26.251086  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:26.251127  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:26.251168  110118 httplog.go:90] GET /healthz: (1.258594ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:26.258473  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.394242ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:26.258886  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller
I1109 00:29:26.278119  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:deployment-controller: (1.936832ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:26.298569  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.319222ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:26.299023  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller
I1109 00:29:26.317682  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:disruption-controller: (1.489914ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:26.338438  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:26.338692  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:26.338517  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.278684ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:26.339133  110118 httplog.go:90] GET /healthz: (2.195505ms) 0 [Go-http-client/1.1 127.0.0.1:41660]
I1109 00:29:26.339212  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller
I1109 00:29:26.351121  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:26.351152  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:26.351191  110118 httplog.go:90] GET /healthz: (1.259003ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.358017  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpoint-controller: (1.824592ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.378784  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.562987ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.379055  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
I1109 00:29:26.397526  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:endpointslice-controller: (1.466239ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.419020  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.79836ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.419405  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpointslice-controller
I1109 00:29:26.437373  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:expand-controller: (1.281804ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.437495  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:26.437696  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:26.437755  110118 httplog.go:90] GET /healthz: (1.025942ms) 0 [Go-http-client/1.1 127.0.0.1:41622]
I1109 00:29:26.451074  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:26.451111  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:26.451147  110118 httplog.go:90] GET /healthz: (1.230858ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:26.458353  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.263158ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:26.458748  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:expand-controller
I1109 00:29:26.477868  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:generic-garbage-collector: (1.674703ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:26.498844  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.590753ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:26.499140  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
I1109 00:29:26.518542  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:horizontal-pod-autoscaler: (2.383553ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:26.538329  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.184786ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:26.538589  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
I1109 00:29:26.539323  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:26.539441  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:26.539498  110118 httplog.go:90] GET /healthz: (2.663717ms) 0 [Go-http-client/1.1 127.0.0.1:41660]
I1109 00:29:26.551062  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:26.551090  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:26.551128  110118 httplog.go:90] GET /healthz: (1.191877ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.557761  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:job-controller: (1.561734ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.578836  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.426943ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.579386  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
I1109 00:29:26.598030  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:namespace-controller: (1.77842ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.623790  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (7.530107ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.624364  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
I1109 00:29:26.637903  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:node-controller: (1.708316ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.638357  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:26.638383  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:26.638419  110118 httplog.go:90] GET /healthz: (1.455904ms) 0 [Go-http-client/1.1 127.0.0.1:41622]
I1109 00:29:26.651417  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:26.651453  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:26.651507  110118 httplog.go:90] GET /healthz: (1.474275ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:26.660032  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.192436ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:26.660439  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
I1109 00:29:26.677460  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:persistent-volume-binder: (1.322222ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:26.698576  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.499145ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:26.698878  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
I1109 00:29:26.717875  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pod-garbage-collector: (1.65382ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:26.739291  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.915387ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:26.739560  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
I1109 00:29:26.740186  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:26.740224  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:26.740325  110118 httplog.go:90] GET /healthz: (3.492404ms) 0 [Go-http-client/1.1 127.0.0.1:41660]
I1109 00:29:26.751414  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:26.751596  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:26.751721  110118 httplog.go:90] GET /healthz: (1.758279ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.757874  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replicaset-controller: (1.704606ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.778832  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.690819ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.779156  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
I1109 00:29:26.800342  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:replication-controller: (3.830716ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.818873  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.658593ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.819151  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
I1109 00:29:26.838150  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:26.838184  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:26.838227  110118 httplog.go:90] GET /healthz: (1.344869ms) 0 [Go-http-client/1.1 127.0.0.1:41622]
I1109 00:29:26.838520  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:resourcequota-controller: (2.290296ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.851379  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:26.851429  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:26.851499  110118 httplog.go:90] GET /healthz: (1.547005ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.859790  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (3.612129ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.860100  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
I1109 00:29:26.877988  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:route-controller: (1.733345ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.898794  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.453316ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.899204  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
I1109 00:29:26.918208  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-account-controller: (1.861799ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.938304  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:26.938346  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:26.938386  110118 httplog.go:90] GET /healthz: (1.592922ms) 0 [Go-http-client/1.1 127.0.0.1:41622]
I1109 00:29:26.939090  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.830087ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:26.939443  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
I1109 00:29:26.951710  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:26.951751  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:26.951804  110118 httplog.go:90] GET /healthz: (1.78165ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:26.957651  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:service-controller: (1.550297ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:26.979087  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.890219ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:26.979391  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
I1109 00:29:26.997727  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:statefulset-controller: (1.516779ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:27.019078  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.866122ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:27.019423  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
I1109 00:29:27.039641  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:27.039876  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:27.040068  110118 httplog.go:90] GET /healthz: (3.167696ms) 0 [Go-http-client/1.1 127.0.0.1:41660]
I1109 00:29:27.039814  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:ttl-controller: (3.655864ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:27.051195  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:27.051223  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:27.051297  110118 httplog.go:90] GET /healthz: (1.282494ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.058885  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.76176ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.059314  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
I1109 00:29:27.078081  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:certificate-controller: (1.856657ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.099066  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.633008ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.099363  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
I1109 00:29:27.118087  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pvc-protection-controller: (1.6512ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.138050  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:27.138096  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:27.138140  110118 httplog.go:90] GET /healthz: (1.470599ms) 0 [Go-http-client/1.1 127.0.0.1:41622]
I1109 00:29:27.138831  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.509228ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.139181  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
I1109 00:29:27.151662  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:27.151717  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:27.151770  110118 httplog.go:90] GET /healthz: (1.462148ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.157996  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/system:controller:pv-protection-controller: (1.859298ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.178633  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/clusterrolebindings: (2.60389ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.178970  110118 storage_rbac.go:247] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
I1109 00:29:27.198958  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/extension-apiserver-authentication-reader: (2.002343ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.201475  110118 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.794527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.220097  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.925823ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.220390  110118 storage_rbac.go:278] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
I1109 00:29:27.237616  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:bootstrap-signer: (1.516695ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.239146  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:27.239393  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:27.239608  110118 httplog.go:90] GET /healthz: (2.563865ms) 0 [Go-http-client/1.1 127.0.0.1:41622]
I1109 00:29:27.239893  110118 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.541218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.251404  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:27.251444  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:27.251487  110118 httplog.go:90] GET /healthz: (1.463254ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.258892  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.640138ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.259378  110118 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1109 00:29:27.278019  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:cloud-provider: (1.801445ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.280266  110118 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.572739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.298908  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.781586ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.299259  110118 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1109 00:29:27.318218  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system:controller:token-cleaner: (1.915481ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.320793  110118 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.796182ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.338169  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:27.338210  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:27.338276  110118 httplog.go:90] GET /healthz: (1.551874ms) 0 [Go-http-client/1.1 127.0.0.1:41622]
I1109 00:29:27.339178  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (3.000124ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.339471  110118 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1109 00:29:27.351793  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:27.351838  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:27.351911  110118 httplog.go:90] GET /healthz: (1.881788ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.358030  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-controller-manager: (1.669132ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.360029  110118 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.418959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.378919  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (2.600682ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.379310  110118 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1109 00:29:27.397891  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles/system::leader-locking-kube-scheduler: (1.676616ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.400920  110118 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.384443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.420392  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles: (4.162321ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.420807  110118 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1109 00:29:27.437518  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles/system:controller:bootstrap-signer: (1.395337ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.437678  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:27.437698  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:27.437740  110118 httplog.go:90] GET /healthz: (985.415µs) 0 [Go-http-client/1.1 127.0.0.1:41622]
I1109 00:29:27.439448  110118 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.435418ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:27.451407  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:27.451452  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:27.451560  110118 httplog.go:90] GET /healthz: (1.551918ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:27.458429  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/roles: (2.294042ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:27.458663  110118 storage_rbac.go:278] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1109 00:29:27.477871  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::extension-apiserver-authentication-reader: (1.67888ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:27.480567  110118 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.13063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:27.499154  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.884435ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:27.499474  110118 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::extension-apiserver-authentication-reader in kube-system
I1109 00:29:27.517925  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-controller-manager: (1.753507ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:27.520533  110118 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.700044ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:27.538762  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.655195ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:27.539443  110118 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
I1109 00:29:27.540272  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:27.540427  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:27.540604  110118 httplog.go:90] GET /healthz: (3.875762ms) 0 [Go-http-client/1.1 127.0.0.1:41660]
I1109 00:29:27.551029  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:27.551092  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:27.551134  110118 httplog.go:90] GET /healthz: (1.219699ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.559801  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system::leader-locking-kube-scheduler: (3.751936ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.562408  110118 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.760621ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.580065  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.808431ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.580353  110118 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
I1109 00:29:27.597525  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:bootstrap-signer: (1.426451ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.600414  110118 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.916704ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.619439  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (3.135984ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.619951  110118 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
I1109 00:29:27.638095  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:27.638147  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:27.638171  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:cloud-provider: (1.932365ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.638198  110118 httplog.go:90] GET /healthz: (1.345414ms) 0 [Go-http-client/1.1 127.0.0.1:41622]
I1109 00:29:27.640514  110118 httplog.go:90] GET /api/v1/namespaces/kube-system: (1.730296ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:27.651128  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:27.651181  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:27.651227  110118 httplog.go:90] GET /healthz: (1.234424ms) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:27.658833  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.635374ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:27.659337  110118 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
I1109 00:29:27.677772  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings/system:controller:token-cleaner: (1.56403ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:27.680533  110118 httplog.go:90] GET /api/v1/namespaces/kube-system: (2.114094ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:27.699888  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings: (2.731618ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:27.700527  110118 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
I1109 00:29:27.717941  110118 httplog.go:90] GET /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings/system:controller:bootstrap-signer: (1.794664ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:27.721839  110118 httplog.go:90] GET /api/v1/namespaces/kube-public: (1.975581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:27.738326  110118 healthz.go:177] healthz check poststarthook/rbac/bootstrap-roles failed: not finished
I1109 00:29:27.738395  110118 healthz.go:191] [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/bootstrap-controller ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/start-cluster-authentication-info-controller ok
healthz check failed
I1109 00:29:27.738445  110118 httplog.go:90] GET /healthz: (1.710351ms) 0 [Go-http-client/1.1 127.0.0.1:41660]
I1109 00:29:27.739434  110118 httplog.go:90] POST /apis/rbac.authorization.k8s.io/v1/namespaces/kube-public/rolebindings: (3.218128ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:27.739663  110118 storage_rbac.go:308] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
I1109 00:29:27.751579  110118 httplog.go:90] GET /healthz: (1.562295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:27.754121  110118 httplog.go:90] GET /api/v1/namespaces/default: (1.741672ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:27.756602  110118 httplog.go:90] POST /api/v1/namespaces: (1.950301ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:27.758517  110118 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.367049ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:27.763143  110118 httplog.go:90] POST /api/v1/namespaces/default/services: (4.264937ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:27.766753  110118 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (3.103719ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:27.769407  110118 httplog.go:90] POST /api/v1/namespaces/default/endpoints: (2.115667ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:27.770758  110118 httplog.go:90] GET /apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices/kubernetes: (974.125µs) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:27.773870  110118 httplog.go:90] POST /apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices: (2.636695ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:27.839287  110118 httplog.go:90] GET /healthz: (2.00239ms) 200 [Go-http-client/1.1 127.0.0.1:41622]
W1109 00:29:27.840706  110118 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 00:29:27.840756  110118 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 00:29:27.840803  110118 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 00:29:27.840821  110118 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 00:29:27.840856  110118 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 00:29:27.840921  110118 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 00:29:27.840937  110118 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 00:29:27.840959  110118 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 00:29:27.840977  110118 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 00:29:27.840990  110118 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 00:29:27.841002  110118 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 00:29:27.841020  110118 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1109 00:29:27.841041  110118 factory.go:300] Creating scheduler from algorithm provider 'DefaultProvider'
I1109 00:29:27.841052  110118 factory.go:392] Creating scheduler with fit predicates 'map[CheckNodeUnschedulable:{} CheckVolumeBinding:{} GeneralPredicates:{} MatchInterPodAffinity:{} MaxAzureDiskVolumeCount:{} MaxCSIVolumeCountPred:{} MaxEBSVolumeCount:{} MaxGCEPDVolumeCount:{} NoDiskConflict:{} NoVolumeZoneConflict:{} PodToleratesNodeTaints:{}]' and priority functions 'map[BalancedResourceAllocation:{} ImageLocalityPriority:{} InterPodAffinityPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} SelectorSpreadPriority:{} TaintTolerationPriority:{}]'
I1109 00:29:27.841506  110118 shared_informer.go:197] Waiting for caches to sync for scheduler
I1109 00:29:27.841763  110118 reflector.go:153] Starting reflector *v1.Pod (12h0m0s) from k8s.io/kubernetes/test/integration/scheduler/util.go:210
I1109 00:29:27.841786  110118 reflector.go:188] Listing and watching *v1.Pod from k8s.io/kubernetes/test/integration/scheduler/util.go:210
I1109 00:29:27.842853  110118 httplog.go:90] GET /api/v1/pods?fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&limit=500&resourceVersion=0: (670.135µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:27.843766  110118 get.go:251] Starting watch for /api/v1/pods, rv=56023 labels= fields=status.phase!=Failed,status.phase!=Succeeded timeout=5m55s
I1109 00:29:27.941687  110118 shared_informer.go:227] caches populated
I1109 00:29:27.941721  110118 shared_informer.go:204] Caches are synced for scheduler 
I1109 00:29:27.942072  110118 reflector.go:153] Starting reflector *v1.Service (1s) from k8s.io/client-go/informers/factory.go:135
I1109 00:29:27.942095  110118 reflector.go:188] Listing and watching *v1.Service from k8s.io/client-go/informers/factory.go:135
I1109 00:29:27.942099  110118 reflector.go:153] Starting reflector *v1.StorageClass (1s) from k8s.io/client-go/informers/factory.go:135
I1109 00:29:27.942116  110118 reflector.go:188] Listing and watching *v1.StorageClass from k8s.io/client-go/informers/factory.go:135
I1109 00:29:27.942124  110118 reflector.go:153] Starting reflector *v1.Node (1s) from k8s.io/client-go/informers/factory.go:135
I1109 00:29:27.942136  110118 reflector.go:188] Listing and watching *v1.Node from k8s.io/client-go/informers/factory.go:135
I1109 00:29:27.942344  110118 reflector.go:153] Starting reflector *v1beta1.CSINode (1s) from k8s.io/client-go/informers/factory.go:135
I1109 00:29:27.942359  110118 reflector.go:188] Listing and watching *v1beta1.CSINode from k8s.io/client-go/informers/factory.go:135
I1109 00:29:27.942438  110118 reflector.go:153] Starting reflector *v1.ReplicaSet (1s) from k8s.io/client-go/informers/factory.go:135
I1109 00:29:27.942453  110118 reflector.go:188] Listing and watching *v1.ReplicaSet from k8s.io/client-go/informers/factory.go:135
I1109 00:29:27.942510  110118 reflector.go:153] Starting reflector *v1.PersistentVolumeClaim (1s) from k8s.io/client-go/informers/factory.go:135
I1109 00:29:27.942521  110118 reflector.go:188] Listing and watching *v1.PersistentVolumeClaim from k8s.io/client-go/informers/factory.go:135
I1109 00:29:27.942789  110118 reflector.go:153] Starting reflector *v1.ReplicationController (1s) from k8s.io/client-go/informers/factory.go:135
I1109 00:29:27.942803  110118 reflector.go:188] Listing and watching *v1.ReplicationController from k8s.io/client-go/informers/factory.go:135
I1109 00:29:27.943022  110118 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0: (530.816µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.943064  110118 reflector.go:153] Starting reflector *v1.StatefulSet (1s) from k8s.io/client-go/informers/factory.go:135
I1109 00:29:27.943075  110118 reflector.go:188] Listing and watching *v1.StatefulSet from k8s.io/client-go/informers/factory.go:135
I1109 00:29:27.943492  110118 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?limit=500&resourceVersion=0: (449.792µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41980]
I1109 00:29:27.943572  110118 httplog.go:90] GET /api/v1/persistentvolumeclaims?limit=500&resourceVersion=0: (540.898µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41984]
I1109 00:29:27.943572  110118 httplog.go:90] GET /api/v1/replicationcontrollers?limit=500&resourceVersion=0: (385.881µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41986]
I1109 00:29:27.943924  110118 httplog.go:90] GET /apis/apps/v1/statefulsets?limit=500&resourceVersion=0: (354.06µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41988]
I1109 00:29:27.943966  110118 httplog.go:90] GET /apis/apps/v1/replicasets?limit=500&resourceVersion=0: (241.319µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41982]
I1109 00:29:27.944067  110118 httplog.go:90] GET /api/v1/services?limit=500&resourceVersion=0: (373.115µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41992]
I1109 00:29:27.944272  110118 httplog.go:90] GET /api/v1/nodes?limit=500&resourceVersion=0: (312.818µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41990]
I1109 00:29:27.944273  110118 get.go:251] Starting watch for /apis/storage.k8s.io/v1beta1/csinodes, rv=56023 labels= fields= timeout=5m53s
I1109 00:29:27.944288  110118 get.go:251] Starting watch for /api/v1/replicationcontrollers, rv=56023 labels= fields= timeout=6m4s
I1109 00:29:27.944314  110118 get.go:251] Starting watch for /api/v1/persistentvolumeclaims, rv=56023 labels= fields= timeout=8m1s
I1109 00:29:27.944543  110118 get.go:251] Starting watch for /apis/apps/v1/statefulsets, rv=56023 labels= fields= timeout=8m26s
I1109 00:29:27.944711  110118 get.go:251] Starting watch for /apis/apps/v1/replicasets, rv=56023 labels= fields= timeout=6m58s
I1109 00:29:27.944730  110118 get.go:251] Starting watch for /api/v1/nodes, rv=56023 labels= fields= timeout=8m16s
I1109 00:29:27.944867  110118 reflector.go:153] Starting reflector *v1.PersistentVolume (1s) from k8s.io/client-go/informers/factory.go:135
I1109 00:29:27.944881  110118 reflector.go:188] Listing and watching *v1.PersistentVolume from k8s.io/client-go/informers/factory.go:135
I1109 00:29:27.944997  110118 reflector.go:153] Starting reflector *v1beta1.PodDisruptionBudget (1s) from k8s.io/client-go/informers/factory.go:135
I1109 00:29:27.945011  110118 reflector.go:188] Listing and watching *v1beta1.PodDisruptionBudget from k8s.io/client-go/informers/factory.go:135
I1109 00:29:27.945150  110118 get.go:251] Starting watch for /api/v1/services, rv=56318 labels= fields= timeout=6m49s
I1109 00:29:27.945561  110118 get.go:251] Starting watch for /apis/storage.k8s.io/v1/storageclasses, rv=56023 labels= fields= timeout=6m24s
I1109 00:29:27.945618  110118 httplog.go:90] GET /api/v1/persistentvolumes?limit=500&resourceVersion=0: (288.677µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41660]
I1109 00:29:27.945618  110118 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?limit=500&resourceVersion=0: (251.784µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41996]
I1109 00:29:27.946160  110118 get.go:251] Starting watch for /apis/policy/v1beta1/poddisruptionbudgets, rv=56023 labels= fields= timeout=5m48s
I1109 00:29:27.946179  110118 get.go:251] Starting watch for /api/v1/persistentvolumes, rv=56023 labels= fields= timeout=9m9s
I1109 00:29:28.042019  110118 shared_informer.go:227] caches populated
I1109 00:29:28.042056  110118 shared_informer.go:227] caches populated
I1109 00:29:28.042062  110118 shared_informer.go:227] caches populated
I1109 00:29:28.042066  110118 shared_informer.go:227] caches populated
I1109 00:29:28.042092  110118 shared_informer.go:227] caches populated
I1109 00:29:28.042114  110118 shared_informer.go:227] caches populated
I1109 00:29:28.042121  110118 shared_informer.go:227] caches populated
I1109 00:29:28.042126  110118 shared_informer.go:227] caches populated
I1109 00:29:28.042131  110118 shared_informer.go:227] caches populated
I1109 00:29:28.042140  110118 shared_informer.go:227] caches populated
I1109 00:29:28.042583  110118 shared_informer.go:227] caches populated
I1109 00:29:28.045497  110118 httplog.go:90] POST /api/v1/namespaces: (2.50294ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42056]
I1109 00:29:28.045842  110118 node_lifecycle_controller.go:388] Sending events to api server.
I1109 00:29:28.045941  110118 node_lifecycle_controller.go:423] Controller is using taint based evictions.
W1109 00:29:28.045972  110118 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1109 00:29:28.046058  110118 taint_manager.go:162] Sending events to api server.
I1109 00:29:28.046128  110118 node_lifecycle_controller.go:520] Controller will reconcile labels.
W1109 00:29:28.046175  110118 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
W1109 00:29:28.046201  110118 mutation_detector.go:50] Mutation detector is enabled, this will result in memory leakage.
I1109 00:29:28.046277  110118 node_lifecycle_controller.go:554] Starting node controller
I1109 00:29:28.046298  110118 shared_informer.go:197] Waiting for caches to sync for taint
I1109 00:29:28.046500  110118 reflector.go:153] Starting reflector *v1.Namespace (1s) from k8s.io/client-go/informers/factory.go:135
I1109 00:29:28.046531  110118 reflector.go:188] Listing and watching *v1.Namespace from k8s.io/client-go/informers/factory.go:135
I1109 00:29:28.048030  110118 httplog.go:90] GET /api/v1/namespaces?limit=500&resourceVersion=0: (1.086249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42056]
I1109 00:29:28.049113  110118 get.go:251] Starting watch for /api/v1/namespaces, rv=56331 labels= fields= timeout=8m30s
I1109 00:29:28.146504  110118 shared_informer.go:227] caches populated
I1109 00:29:28.146780  110118 shared_informer.go:227] caches populated
I1109 00:29:28.147042  110118 reflector.go:153] Starting reflector *v1.Pod (1s) from k8s.io/client-go/informers/factory.go:135
I1109 00:29:28.147059  110118 reflector.go:188] Listing and watching *v1.Pod from k8s.io/client-go/informers/factory.go:135
I1109 00:29:28.147200  110118 reflector.go:153] Starting reflector *v1.Lease (1s) from k8s.io/client-go/informers/factory.go:135
I1109 00:29:28.147233  110118 reflector.go:188] Listing and watching *v1.Lease from k8s.io/client-go/informers/factory.go:135
I1109 00:29:28.147361  110118 reflector.go:153] Starting reflector *v1.DaemonSet (1s) from k8s.io/client-go/informers/factory.go:135
I1109 00:29:28.147380  110118 reflector.go:188] Listing and watching *v1.DaemonSet from k8s.io/client-go/informers/factory.go:135
I1109 00:29:28.148352  110118 httplog.go:90] GET /apis/apps/v1/daemonsets?limit=500&resourceVersion=0: (440.426µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42064]
I1109 00:29:28.148355  110118 httplog.go:90] GET /api/v1/pods?limit=500&resourceVersion=0: (612.861µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42060]
I1109 00:29:28.148423  110118 httplog.go:90] GET /apis/coordination.k8s.io/v1/leases?limit=500&resourceVersion=0: (499.835µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42062]
I1109 00:29:28.148938  110118 get.go:251] Starting watch for /apis/apps/v1/daemonsets, rv=56023 labels= fields= timeout=5m18s
I1109 00:29:28.148938  110118 get.go:251] Starting watch for /api/v1/pods, rv=56023 labels= fields= timeout=8m1s
I1109 00:29:28.149505  110118 shared_informer.go:227] caches populated
I1109 00:29:28.149602  110118 shared_informer.go:227] caches populated
I1109 00:29:28.149687  110118 shared_informer.go:227] caches populated
I1109 00:29:28.149701  110118 shared_informer.go:227] caches populated
I1109 00:29:28.149779  110118 shared_informer.go:227] caches populated
I1109 00:29:28.149898  110118 shared_informer.go:227] caches populated
I1109 00:29:28.149909  110118 shared_informer.go:227] caches populated
I1109 00:29:28.149916  110118 shared_informer.go:227] caches populated
I1109 00:29:28.149911  110118 get.go:251] Starting watch for /apis/coordination.k8s.io/v1/leases, rv=56023 labels= fields= timeout=9m55s
I1109 00:29:28.149925  110118 shared_informer.go:227] caches populated
I1109 00:29:28.149933  110118 shared_informer.go:227] caches populated
I1109 00:29:28.149938  110118 shared_informer.go:227] caches populated
I1109 00:29:28.149944  110118 shared_informer.go:227] caches populated
I1109 00:29:28.153900  110118 httplog.go:90] POST /api/v1/nodes: (2.650905ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:28.154080  110118 node_tree.go:86] Added node "node-0" in group "region1:\x00:zone1" to NodeTree
I1109 00:29:28.158465  110118 httplog.go:90] POST /api/v1/nodes: (4.089863ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:28.159141  110118 node_tree.go:86] Added node "node-1" in group "region1:\x00:zone1" to NodeTree
I1109 00:29:28.161336  110118 httplog.go:90] POST /api/v1/nodes: (2.30107ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:28.162054  110118 node_tree.go:86] Added node "node-2" in group "region1:\x00:zone1" to NodeTree
I1109 00:29:28.164325  110118 httplog.go:90] POST /api/v1/namespaces/taint-based-evictionsa1f7ef65-0fbf-4022-9bf2-eff1cc40c847/pods: (2.231872ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:28.165339  110118 scheduling_queue.go:841] About to try and schedule pod taint-based-evictionsa1f7ef65-0fbf-4022-9bf2-eff1cc40c847/testpod-0
I1109 00:29:28.165360  110118 scheduler.go:611] Attempting to schedule pod: taint-based-evictionsa1f7ef65-0fbf-4022-9bf2-eff1cc40c847/testpod-0
I1109 00:29:28.166185  110118 scheduler_binder.go:257] AssumePodVolumes for pod "taint-based-evictionsa1f7ef65-0fbf-4022-9bf2-eff1cc40c847/testpod-0", node "node-2"
I1109 00:29:28.166205  110118 scheduler_binder.go:267] AssumePodVolumes for pod "taint-based-evictionsa1f7ef65-0fbf-4022-9bf2-eff1cc40c847/testpod-0", node "node-2": all PVCs bound and nothing to do
I1109 00:29:28.166358  110118 factory.go:698] Attempting to bind testpod-0 to node-2
I1109 00:29:28.169366  110118 httplog.go:90] POST /api/v1/namespaces/taint-based-evictionsa1f7ef65-0fbf-4022-9bf2-eff1cc40c847/pods/testpod-0/binding: (2.695133ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:28.169842  110118 scheduler.go:756] pod taint-based-evictionsa1f7ef65-0fbf-4022-9bf2-eff1cc40c847/testpod-0 is bound successfully on node "node-2", 3 nodes evaluated, 3 nodes were found feasible.
I1109 00:29:28.172520  110118 httplog.go:90] POST /apis/events.k8s.io/v1beta1/namespaces/taint-based-evictionsa1f7ef65-0fbf-4022-9bf2-eff1cc40c847/events: (2.280459ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:28.246437  110118 shared_informer.go:227] caches populated
I1109 00:29:28.246470  110118 shared_informer.go:204] Caches are synced for taint 
I1109 00:29:28.246591  110118 node_lifecycle_controller.go:787] Controller observed a new Node: "node-0"
I1109 00:29:28.246656  110118 controller_utils.go:167] Recording Registered Node node-0 in Controller event message for node node-0
I1109 00:29:28.246693  110118 node_lifecycle_controller.go:1443] Initializing eviction metric for zone: region1:\x00:zone1
I1109 00:29:28.246715  110118 node_lifecycle_controller.go:787] Controller observed a new Node: "node-1"
I1109 00:29:28.246722  110118 controller_utils.go:167] Recording Registered Node node-1 in Controller event message for node node-1
I1109 00:29:28.246733  110118 node_lifecycle_controller.go:787] Controller observed a new Node: "node-2"
I1109 00:29:28.246741  110118 controller_utils.go:167] Recording Registered Node node-2 in Controller event message for node node-2
I1109 00:29:28.246803  110118 taint_manager.go:186] Starting NoExecuteTaintManager
W1109 00:29:28.246851  110118 node_lifecycle_controller.go:1058] Missing timestamp for Node node-0. Assuming now as a timestamp.
W1109 00:29:28.246906  110118 node_lifecycle_controller.go:1058] Missing timestamp for Node node-1. Assuming now as a timestamp.
W1109 00:29:28.246951  110118 node_lifecycle_controller.go:1058] Missing timestamp for Node node-2. Assuming now as a timestamp.
I1109 00:29:28.246987  110118 node_lifecycle_controller.go:1259] Controller detected that zone region1:\x00:zone1 is now in state Normal.
I1109 00:29:28.247073  110118 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-0"}
I1109 00:29:28.247094  110118 taint_manager.go:438] Updating known taints on node node-0: []
I1109 00:29:28.247093  110118 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-1"}
I1109 00:29:28.247141  110118 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I1109 00:29:28.247151  110118 taint_manager.go:438] Updating known taints on node node-1: []
I1109 00:29:28.247178  110118 taint_manager.go:438] Updating known taints on node node-2: []
I1109 00:29:28.247191  110118 taint_manager.go:459] All taints were removed from the Node node-2. Cancelling all evictions...
I1109 00:29:28.247204  110118 taint_manager.go:398] Noticed pod update: types.NamespacedName{Namespace:"taint-based-evictionsa1f7ef65-0fbf-4022-9bf2-eff1cc40c847", Name:"testpod-0"}
I1109 00:29:28.247180  110118 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-2", UID:"f3c4a9a8-2473-47ed-a57f-c0c7a20cce99", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-2 event: Registered Node node-2 in Controller
I1109 00:29:28.247259  110118 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-0", UID:"e09cf128-ff9c-4cde-bcde-5308722b79d2", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-0 event: Registered Node node-0 in Controller
I1109 00:29:28.247268  110118 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-1", UID:"cb224d6a-6f49-43aa-a8a3-66eb17651b6b", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-1 event: Registered Node node-1 in Controller
I1109 00:29:28.250052  110118 httplog.go:90] POST /api/v1/namespaces/default/events: (2.555647ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:28.253018  110118 httplog.go:90] POST /api/v1/namespaces/default/events: (2.389503ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:28.255818  110118 httplog.go:90] POST /api/v1/namespaces/default/events: (2.18905ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:28.267717  110118 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsa1f7ef65-0fbf-4022-9bf2-eff1cc40c847/pods/testpod-0: (2.390956ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:28.270177  110118 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsa1f7ef65-0fbf-4022-9bf2-eff1cc40c847/pods/testpod-0: (1.772246ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:28.272503  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.479539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:28.275583  110118 httplog.go:90] PUT /api/v1/nodes/node-2/status: (2.376491ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:28.276752  110118 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (458.595µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:28.282521  110118 httplog.go:90] PATCH /api/v1/nodes/node-2: (5.000605ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:28.283149  110118 controller_utils.go:203] Added [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-11-09 00:29:28.276005661 +0000 UTC m=+251.230839223,}] Taint to Node node-2
I1109 00:29:28.283303  110118 controller_utils.go:215] Made sure that Node node-2 has no [] Taint
I1109 00:29:28.378156  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.769322ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:28.478563  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.102599ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:28.579044  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.308464ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:28.681508  110118 httplog.go:90] GET /api/v1/nodes/node-2: (3.798542ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:28.778898  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.407099ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:28.878812  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.364309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:28.944164  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:28.944324  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:28.944739  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:28.944806  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:28.945500  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:28.946152  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:28.978531  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.051373ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:29.078474  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.018065ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:29.148916  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:29.179124  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.804479ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:29.279920  110118 httplog.go:90] GET /api/v1/nodes/node-2: (3.49989ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:29.378419  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.016215ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:29.478531  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.039471ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:29.578957  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.171218ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:29.678598  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.154758ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:29.778790  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.367483ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:29.878751  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.205875ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:29.944485  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:29.944485  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:29.945033  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:29.945233  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:29.945709  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:29.946301  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:29.978705  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.128164ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:30.078527  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.11128ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:30.149117  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:30.165803  110118 httplog.go:90] PUT /api/v1/nodes/node-0/status: (3.197229ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:30.168993  110118 httplog.go:90] PUT /api/v1/nodes/node-1/status: (6.010224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:30.178684  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.090473ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:30.278653  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.267455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:30.378637  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.830748ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:30.479047  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.67384ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:30.578700  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.274171ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:30.678924  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.342202ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:30.778556  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.061501ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:30.878880  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.337848ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:30.944693  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:30.944757  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:30.945265  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:30.945420  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:30.945901  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:30.946456  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:30.978705  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.308173ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:31.079053  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.563759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:31.149320  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:31.178845  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.29428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:31.279189  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.79581ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:31.378573  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.219938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:31.481462  110118 httplog.go:90] GET /api/v1/nodes/node-2: (5.06612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:31.578672  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.256488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:31.679354  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.925018ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:31.779481  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.983885ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:31.881548  110118 httplog.go:90] GET /api/v1/nodes/node-2: (4.548727ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:31.944859  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:31.944894  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:31.945438  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:31.945622  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:31.946079  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:31.946770  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:31.978938  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.401267ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:32.079413  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.922086ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:32.149518  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:32.169877  110118 httplog.go:90] PUT /api/v1/nodes/node-0/status: (3.125578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:32.173015  110118 httplog.go:90] PUT /api/v1/nodes/node-1/status: (2.924033ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:32.178265  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.943514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:32.280684  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.231767ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:32.378659  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.080146ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:32.478613  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.857506ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:32.582511  110118 httplog.go:90] GET /api/v1/nodes/node-2: (5.121996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:32.680056  110118 httplog.go:90] GET /api/v1/nodes/node-2: (3.470184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:32.778706  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.302558ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
E1109 00:29:32.816064  110118 event_broadcaster.go:247] Unable to write event: 'Post http://127.0.0.1:46651/apis/events.k8s.io/v1beta1/namespaces/permit-plugins9bb06b34-0385-4607-88a7-8da8fb12ea45/events: dial tcp 127.0.0.1:46651: connect: connection refused' (may retry after sleeping)
I1109 00:29:32.878784  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.367386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:32.945050  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:32.945099  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:32.945611  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:32.945924  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:32.946219  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:32.946894  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:32.978741  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.081889ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:33.078763  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.256607ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:33.149632  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:33.178758  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.365064ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:33.247271  110118 node_lifecycle_controller.go:1092] Node node-0 ReadyCondition updated. Updating timestamp.
I1109 00:29:33.247369  110118 node_lifecycle_controller.go:1092] Node node-1 ReadyCondition updated. Updating timestamp.
I1109 00:29:33.247401  110118 node_lifecycle_controller.go:1137] node node-2 hasn't been updated for 5.000442808s. Last Ready is: &NodeCondition{Type:Ready,Status:False,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,Reason:,Message:,}
I1109 00:29:33.247496  110118 node_lifecycle_controller.go:1127] Condition MemoryPressure of node node-2 was never updated by kubelet
I1109 00:29:33.247613  110118 node_lifecycle_controller.go:1127] Condition DiskPressure of node node-2 was never updated by kubelet
I1109 00:29:33.247633  110118 node_lifecycle_controller.go:1127] Condition PIDPressure of node node-2 was never updated by kubelet
I1109 00:29:33.252095  110118 httplog.go:90] PUT /api/v1/nodes/node-2/status: (3.80906ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:33.252618  110118 node_lifecycle_controller.go:886] Node node-2 is NotReady as of 2019-11-09 00:29:33.252596636 +0000 UTC m=+256.207430210. Adding it to the Taint queue.
I1109 00:29:33.255207  110118 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (1.177688ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:33.255723  110118 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (732.833µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:33.260869  110118 store.go:365] GuaranteedUpdate of /7f56772b-fbf6-4403-ae70-b2541a215198/minions/node-2 failed because of a conflict, going to retry
I1109 00:29:33.261046  110118 httplog.go:90] PATCH /api/v1/nodes/node-2: (4.730591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:33.262034  110118 controller_utils.go:203] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoSchedule,TimeAdded:2019-11-09 00:29:33.253644786 +0000 UTC m=+256.208478363,}] Taint to Node node-2
I1109 00:29:33.262946  110118 httplog.go:90] GET /api/v1/nodes/node-2?resourceVersion=0: (577.23µs) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:33.263958  110118 httplog.go:90] PATCH /api/v1/nodes/node-2: (7.22113ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:33.264315  110118 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I1109 00:29:33.264344  110118 taint_manager.go:438] Updating known taints on node node-2: [{node.kubernetes.io/unreachable  NoExecute 2019-11-09 00:29:33 +0000 UTC}]
I1109 00:29:33.264417  110118 timed_workers.go:110] Adding TimedWorkerQueue item taint-based-evictionsa1f7ef65-0fbf-4022-9bf2-eff1cc40c847/testpod-0 at 2019-11-09 00:29:33.264405159 +0000 UTC m=+256.219238735 to be fired at 2019-11-09 00:34:33.264405159 +0000 UTC m=+556.219238735
I1109 00:29:33.264447  110118 controller_utils.go:203] Added [&Taint{Key:node.kubernetes.io/unreachable,Value:,Effect:NoExecute,TimeAdded:2019-11-09 00:29:33.254709424 +0000 UTC m=+256.209542991,}] Taint to Node node-2
I1109 00:29:33.264491  110118 controller_utils.go:215] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoExecute,TimeAdded:<nil>,}] Taint
I1109 00:29:33.266586  110118 httplog.go:90] PATCH /api/v1/nodes/node-2: (2.832782ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:33.266754  110118 taint_manager.go:433] Noticed node update: scheduler.nodeUpdateItem{nodeName:"node-2"}
I1109 00:29:33.266778  110118 taint_manager.go:438] Updating known taints on node node-2: []
I1109 00:29:33.266800  110118 taint_manager.go:459] All taints were removed from the Node node-2. Cancelling all evictions...
I1109 00:29:33.266811  110118 timed_workers.go:129] Cancelling TimedWorkerQueue item taint-based-evictionsa1f7ef65-0fbf-4022-9bf2-eff1cc40c847/testpod-0 at 2019-11-09 00:29:33.266807075 +0000 UTC m=+256.221640646
I1109 00:29:33.267014  110118 event.go:281] Event(v1.ObjectReference{Kind:"Pod", Namespace:"taint-based-evictionsa1f7ef65-0fbf-4022-9bf2-eff1cc40c847", Name:"testpod-0", UID:"", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TaintManagerEviction' Cancelling deletion of Pod taint-based-evictionsa1f7ef65-0fbf-4022-9bf2-eff1cc40c847/testpod-0
I1109 00:29:33.267393  110118 controller_utils.go:215] Made sure that Node node-2 has no [&Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:2019-11-09 00:29:28 +0000 UTC,}] Taint
I1109 00:29:33.269997  110118 httplog.go:90] POST /api/v1/namespaces/taint-based-evictionsa1f7ef65-0fbf-4022-9bf2-eff1cc40c847/events: (2.705796ms) 201 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:33.278769  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.4269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:33.378267  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.876535ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
E1109 00:29:33.418907  110118 factory.go:682] Error getting pod allocatable5de597eb-755b-4b67-bc0c-d94fd5d7c260/pod-test-allocatable2 for retry: Get http://127.0.0.1:34327/api/v1/namespaces/allocatable5de597eb-755b-4b67-bc0c-d94fd5d7c260/pods/pod-test-allocatable2: dial tcp 127.0.0.1:34327: connect: connection refused; retrying...
I1109 00:29:33.478611  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.190316ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:33.578687  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.085803ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:33.678757  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.322725ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:33.778711  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.354857ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:33.878219  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.867176ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:33.945165  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:33.945343  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:33.945861  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:33.946393  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:33.946447  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:33.947063  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:33.978804  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.345456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:34.080100  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.535935ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:34.149847  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:34.173751  110118 httplog.go:90] PUT /api/v1/nodes/node-0/status: (2.935983ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:34.176647  110118 httplog.go:90] PUT /api/v1/nodes/node-1/status: (2.752695ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:34.177836  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.49594ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:34.278973  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.588302ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:34.378645  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.233937ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:34.481226  110118 httplog.go:90] GET /api/v1/nodes/node-2: (4.750653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:34.582725  110118 httplog.go:90] GET /api/v1/nodes/node-2: (6.255251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:34.678619  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.178591ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:34.778548  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.08378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:34.880568  110118 httplog.go:90] GET /api/v1/nodes/node-2: (4.193712ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:34.945559  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:34.945662  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:34.946145  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:34.946677  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:34.946682  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:34.947235  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:34.982834  110118 httplog.go:90] GET /api/v1/nodes/node-2: (4.482571ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:35.078153  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.753036ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:35.150066  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:35.178585  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.101774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:35.278843  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.398759ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:35.378364  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.016539ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:35.478608  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.217921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:35.579093  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.361681ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:35.678745  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.213412ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:35.778796  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.286882ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:35.878413  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.038509ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:35.945732  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:35.945972  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:35.946688  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:35.946852  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:35.946900  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:35.947404  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:35.979014  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.431356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:36.077914  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.604335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:36.150328  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:36.177464  110118 httplog.go:90] PUT /api/v1/nodes/node-0/status: (2.555923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:36.178151  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.776733ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:36.180684  110118 httplog.go:90] PUT /api/v1/nodes/node-1/status: (3.067163ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42576]
I1109 00:29:36.278106  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.766386ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:36.378534  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.098238ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:36.478407  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.08958ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:36.578361  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.945358ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:36.678976  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.562325ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:36.778958  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.358032ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:36.878717  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.249525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:36.945932  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:36.946291  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:36.946843  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:36.946958  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:36.947046  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:36.947582  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:36.978765  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.318369ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:37.078548  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.91529ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:37.150546  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:37.187862  110118 httplog.go:90] GET /api/v1/nodes/node-2: (11.264274ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:37.278388  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.025269ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:37.378277  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.883612ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:37.478585  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.013801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:37.578546  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.099597ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:37.678790  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.266378ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:37.754641  110118 httplog.go:90] GET /api/v1/namespaces/default: (1.662076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:37.756664  110118 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.50801ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:37.758791  110118 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.693648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:37.760898  110118 httplog.go:90] GET /apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices/kubernetes: (1.520058ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:37.778925  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.538365ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:37.878528  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.016488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:37.946095  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:37.946457  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:37.947055  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:37.947119  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:37.947313  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:37.947758  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:37.978785  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.145776ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:38.078573  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.24997ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:38.150757  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:38.179186  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.710763ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:38.181022  110118 httplog.go:90] PUT /api/v1/nodes/node-0/status: (2.591804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:38.184932  110118 httplog.go:90] PUT /api/v1/nodes/node-1/status: (2.706089ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:38.252877  110118 node_lifecycle_controller.go:1092] Node node-1 ReadyCondition updated. Updating timestamp.
I1109 00:29:38.253131  110118 node_lifecycle_controller.go:1137] node node-2 hasn't been updated for 10.006168992s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-11-09 00:29:33 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I1109 00:29:38.253181  110118 node_lifecycle_controller.go:1137] node node-2 hasn't been updated for 10.006222806s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-11-09 00:29:28 +0000 UTC,LastTransitionTime:2019-11-09 00:29:33 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1109 00:29:38.253197  110118 node_lifecycle_controller.go:1137] node node-2 hasn't been updated for 10.006239043s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-11-09 00:29:28 +0000 UTC,LastTransitionTime:2019-11-09 00:29:33 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1109 00:29:38.253211  110118 node_lifecycle_controller.go:1137] node node-2 hasn't been updated for 10.006252661s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-11-09 00:29:28 +0000 UTC,LastTransitionTime:2019-11-09 00:29:33 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1109 00:29:38.253291  110118 node_lifecycle_controller.go:1092] Node node-0 ReadyCondition updated. Updating timestamp.
I1109 00:29:38.278394  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.071545ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:38.378472  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.003435ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:38.481005  110118 httplog.go:90] GET /api/v1/nodes/node-2: (4.766069ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:38.578199  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.754357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:38.678885  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.399792ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:38.779253  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.627465ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:38.878278  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.987862ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:38.946443  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:38.946627  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:38.947231  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:38.947321  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:38.947489  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:38.947912  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:38.978367  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.93928ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:39.078433  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.031528ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:39.150963  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:39.178226  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.925183ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:39.278874  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.355752ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:39.378839  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.351736ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:39.478852  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.413957ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:39.578500  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.993213ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:39.679565  110118 httplog.go:90] GET /api/v1/nodes/node-2: (3.048242ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:39.778276  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.771309ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:39.881652  110118 httplog.go:90] GET /api/v1/nodes/node-2: (3.211643ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:39.946759  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:39.946844  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:39.947408  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:39.947579  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:39.947642  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:39.948367  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:39.978399  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.946912ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:40.078721  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.254046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:40.151329  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:40.178955  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.384422ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:40.184974  110118 httplog.go:90] PUT /api/v1/nodes/node-0/status: (3.013452ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:40.188997  110118 httplog.go:90] PUT /api/v1/nodes/node-1/status: (3.076527ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:40.278581  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.175666ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:40.378755  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.139422ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:40.478176  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.863648ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:40.579161  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.602578ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:40.678619  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.263281ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:40.778993  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.550574ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:40.878213  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.766396ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:40.946974  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:40.947055  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:40.947583  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:40.947735  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:40.947865  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:40.948552  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:40.978375  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.014427ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:41.078517  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.094986ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:41.151574  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:41.178737  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.2443ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:41.278612  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.171093ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:41.378564  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.091505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:41.478727  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.134366ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:41.578949  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.436974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:41.683668  110118 httplog.go:90] GET /api/v1/nodes/node-2: (7.157352ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:41.779156  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.739364ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:41.879016  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.573513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:41.947134  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:41.947170  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:41.947757  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:41.947856  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:41.948100  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:41.948749  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:41.978463  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.019827ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:42.078827  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.438627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:42.151782  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:42.178520  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.127576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:42.189911  110118 httplog.go:90] PUT /api/v1/nodes/node-0/status: (3.675368ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:42.197538  110118 httplog.go:90] PUT /api/v1/nodes/node-1/status: (4.657979ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:42.278723  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.264446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:42.379284  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.832845ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:42.478519  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.144859ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:42.578970  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.559512ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:42.678174  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.770389ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:42.779940  110118 httplog.go:90] GET /api/v1/nodes/node-2: (3.402982ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:42.878769  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.269065ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:42.947307  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:42.947383  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:42.947940  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:42.948032  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:42.948330  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:42.948919  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:42.978805  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.390076ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
E1109 00:29:43.039882  110118 event_broadcaster.go:247] Unable to write event: 'Post http://127.0.0.1:46651/apis/events.k8s.io/v1beta1/namespaces/permit-plugins9bb06b34-0385-4607-88a7-8da8fb12ea45/events: dial tcp 127.0.0.1:46651: connect: connection refused' (may retry after sleeping)
E1109 00:29:43.039941  110118 event_broadcaster.go:197] Unable to write event '&v1beta1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"test-pod.15d556f6ce3fc760", GenerateName:"", Namespace:"permit-plugins9bb06b34-0385-4607-88a7-8da8fb12ea45", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, EventTime:v1.MicroTime{Time:time.Time{wall:0xbf699fdfb325390f, ext:145812911023, loc:(*time.Location)(0x7383f40)}}, Series:(*v1beta1.EventSeries)(nil), ReportingController:"default-scheduler", ReportingInstance:"default-scheduler-94b806c3-0283-11ea-8234-8a7b684c2658", Action:"Scheduling", Reason:"FailedScheduling", Regarding:v1.ObjectReference{Kind:"Pod", Namespace:"permit-plugins9bb06b34-0385-4607-88a7-8da8fb12ea45", Name:"test-pod", UID:"f26a75ba-cabf-4d2b-b65f-cbe83b33c3de", APIVersion:"v1", ResourceVersion:"29523", FieldPath:""}, Related:(*v1.ObjectReference)(nil), Note:"0/2 nodes are available: .", Type:"Warning", DeprecatedSource:v1.EventSource{Component:"default-scheduler", Host:""}, DeprecatedFirstTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedLastTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeprecatedCount:0}' (retry limit exceeded!)
I1109 00:29:43.079075  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.585889ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:43.151986  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:43.178449  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.066993ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:43.253605  110118 node_lifecycle_controller.go:1137] node node-2 hasn't been updated for 15.006627605s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-11-09 00:29:33 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I1109 00:29:43.253727  110118 node_lifecycle_controller.go:1137] node node-2 hasn't been updated for 15.006765031s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-11-09 00:29:28 +0000 UTC,LastTransitionTime:2019-11-09 00:29:33 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1109 00:29:43.253765  110118 node_lifecycle_controller.go:1137] node node-2 hasn't been updated for 15.006804896s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-11-09 00:29:28 +0000 UTC,LastTransitionTime:2019-11-09 00:29:33 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1109 00:29:43.253788  110118 node_lifecycle_controller.go:1137] node node-2 hasn't been updated for 15.006828412s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-11-09 00:29:28 +0000 UTC,LastTransitionTime:2019-11-09 00:29:33 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1109 00:29:43.254018  110118 node_lifecycle_controller.go:1092] Node node-0 ReadyCondition updated. Updating timestamp.
I1109 00:29:43.254073  110118 node_lifecycle_controller.go:1092] Node node-1 ReadyCondition updated. Updating timestamp.
I1109 00:29:43.278171  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.725403ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:43.378817  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.331414ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:43.478323  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.002614ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:43.578629  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.20531ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:43.678573  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.068968ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:43.778814  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.174387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:43.878380  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.936835ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:43.947507  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:43.947559  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:43.948061  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:43.948158  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:43.948485  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:43.949079  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:43.978588  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.235446ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:44.078367  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.915706ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:44.152197  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:44.179434  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.97222ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:44.196999  110118 httplog.go:90] PUT /api/v1/nodes/node-0/status: (3.316729ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:44.202140  110118 httplog.go:90] PUT /api/v1/nodes/node-1/status: (3.516561ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:44.278907  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.443889ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:44.378989  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.473023ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:44.478780  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.293636ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:44.579046  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.3657ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:44.683910  110118 httplog.go:90] GET /api/v1/nodes/node-2: (7.240073ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:44.781696  110118 httplog.go:90] GET /api/v1/nodes/node-2: (5.149098ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:44.878718  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.201247ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:44.947738  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:44.948052  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:44.948298  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:44.948318  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:44.948658  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:44.949225  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:44.979002  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.531553ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:45.078687  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.227194ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:45.152463  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:45.178846  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.313938ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:45.278516  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.175481ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:45.378695  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.240331ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:45.478882  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.475863ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:45.578986  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.420191ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:45.679183  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.72091ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:45.778668  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.154449ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:45.878532  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.964285ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:45.947908  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:45.948207  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:45.948510  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:45.948515  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:45.948875  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:45.949423  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:45.978324  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.97019ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:46.079280  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.621755ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:46.152683  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:46.179101  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.609477ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:46.202078  110118 httplog.go:90] PUT /api/v1/nodes/node-0/status: (4.044321ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:46.205890  110118 httplog.go:90] PUT /api/v1/nodes/node-1/status: (2.902836ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:46.278645  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.244974ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:46.378355  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.894962ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:46.478769  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.285142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:46.578718  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.050142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:46.678924  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.368204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:46.780052  110118 httplog.go:90] GET /api/v1/nodes/node-2: (3.54921ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:46.878949  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.545016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:46.948149  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:46.948358  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:46.948704  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:46.948706  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:46.949304  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:46.949556  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:46.978580  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.053513ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:47.078311  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.845016ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:47.152911  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:47.178701  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.239348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:47.278491  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.076063ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:47.378621  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.155461ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:47.478084  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.791608ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:47.578200  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.748828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:47.678722  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.171112ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:47.754835  110118 httplog.go:90] GET /api/v1/namespaces/default: (1.705563ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:47.756872  110118 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.488354ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:47.759425  110118 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.752259ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:47.762439  110118 httplog.go:90] GET /apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices/kubernetes: (2.266004ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:47.778822  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.354306ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:47.884909  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.453996ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:47.948439  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:47.948536  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:47.948895  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:47.948907  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:47.949462  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:47.949734  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:47.978502  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.025951ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:48.078553  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.105357ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:48.153175  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:48.178709  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.227454ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:48.205769  110118 httplog.go:90] PUT /api/v1/nodes/node-0/status: (2.279817ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:48.209368  110118 httplog.go:90] PUT /api/v1/nodes/node-1/status: (2.570567ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:48.255559  110118 node_lifecycle_controller.go:1092] Node node-0 ReadyCondition updated. Updating timestamp.
I1109 00:29:48.255747  110118 node_lifecycle_controller.go:1092] Node node-1 ReadyCondition updated. Updating timestamp.
I1109 00:29:48.255790  110118 node_lifecycle_controller.go:1137] node node-2 hasn't been updated for 20.008829769s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-11-09 00:29:33 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I1109 00:29:48.255919  110118 node_lifecycle_controller.go:1137] node node-2 hasn't been updated for 20.008957758s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-11-09 00:29:28 +0000 UTC,LastTransitionTime:2019-11-09 00:29:33 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1109 00:29:48.255995  110118 node_lifecycle_controller.go:1137] node node-2 hasn't been updated for 20.009035123s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-11-09 00:29:28 +0000 UTC,LastTransitionTime:2019-11-09 00:29:33 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1109 00:29:48.256116  110118 node_lifecycle_controller.go:1137] node node-2 hasn't been updated for 20.009155741s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-11-09 00:29:28 +0000 UTC,LastTransitionTime:2019-11-09 00:29:33 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1109 00:29:48.278656  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.214391ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:48.378612  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.102756ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:48.478222  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.819387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:48.578562  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.121525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:48.678753  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.169297ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:48.779386  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.904276ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:48.879231  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.567295ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:48.948663  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:48.948714  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:48.949097  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:48.949099  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:48.949638  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:48.949927  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:48.978034  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.698715ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:49.078981  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.385224ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:49.153430  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:49.179592  110118 httplog.go:90] GET /api/v1/nodes/node-2: (3.189576ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:49.278641  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.156944ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:49.378612  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.117087ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:49.478757  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.350717ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:49.578198  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.894722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:49.679031  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.650886ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:49.778549  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.166453ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:49.878542  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.119551ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:49.948841  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:49.948904  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:49.949327  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:49.949402  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:49.949787  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:49.950111  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:49.978454  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.056834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:50.078589  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.138811ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:50.153849  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:50.178895  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.147596ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:50.209653  110118 httplog.go:90] PUT /api/v1/nodes/node-0/status: (3.057746ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:50.212428  110118 httplog.go:90] PUT /api/v1/nodes/node-1/status: (2.383445ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:50.279089  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.647555ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:50.380195  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.02204ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:50.478530  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.011936ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:50.578707  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.23716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:50.678586  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.101722ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:50.778341  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.016024ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:50.878944  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.512184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:50.949040  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:50.949098  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:50.949530  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:50.949589  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:50.949941  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:50.950301  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:50.980866  110118 httplog.go:90] GET /api/v1/nodes/node-2: (4.53226ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:51.078387  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.930468ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:51.154124  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:51.178670  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.191348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:51.279058  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.463814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:51.378412  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.005525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:51.478510  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.998335ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:51.578145  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.77716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:51.678175  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.826075ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:51.778077  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.741212ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:51.878439  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.032329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:51.949271  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:51.949316  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:51.949767  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:51.949768  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:51.950089  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:51.950514  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:51.978437  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.985633ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:52.078923  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.573834ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:52.154432  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:52.178681  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.26961ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:52.214446  110118 httplog.go:90] PUT /api/v1/nodes/node-0/status: (3.665432ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:52.215808  110118 httplog.go:90] PUT /api/v1/nodes/node-1/status: (2.73046ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:52.278479  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.043175ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:52.378918  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.333367ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:52.478573  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.294249ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:52.579631  110118 httplog.go:90] GET /api/v1/nodes/node-2: (3.075197ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:52.678153  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.753201ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:52.778172  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.828107ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:52.878588  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.101999ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:52.949499  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:52.949540  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:52.949999  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:52.950002  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:52.950254  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:52.950709  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:52.978632  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.205124ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:53.079147  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.735799ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:53.154827  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:53.178734  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.26168ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:53.256550  110118 node_lifecycle_controller.go:1092] Node node-0 ReadyCondition updated. Updating timestamp.
I1109 00:29:53.256671  110118 node_lifecycle_controller.go:1092] Node node-1 ReadyCondition updated. Updating timestamp.
I1109 00:29:53.256709  110118 node_lifecycle_controller.go:1137] node node-2 hasn't been updated for 25.009748962s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-11-09 00:29:33 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I1109 00:29:53.256774  110118 node_lifecycle_controller.go:1137] node node-2 hasn't been updated for 25.009814743s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-11-09 00:29:28 +0000 UTC,LastTransitionTime:2019-11-09 00:29:33 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1109 00:29:53.256798  110118 node_lifecycle_controller.go:1137] node node-2 hasn't been updated for 25.009838392s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-11-09 00:29:28 +0000 UTC,LastTransitionTime:2019-11-09 00:29:33 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1109 00:29:53.256815  110118 node_lifecycle_controller.go:1137] node node-2 hasn't been updated for 25.009857364s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-11-09 00:29:28 +0000 UTC,LastTransitionTime:2019-11-09 00:29:33 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1109 00:29:53.278351  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.834814ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:53.378451  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.096934ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:53.478689  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.471804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:53.579006  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.531463ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:53.678213  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.818804ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:53.779076  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.367142ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:53.878339  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.957344ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:53.949777  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:53.950006  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:53.950152  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:53.950161  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:53.950484  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:53.950869  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:53.978436  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.9488ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:54.078005  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.586329ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:54.155043  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:54.178094  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.753237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:54.218609  110118 httplog.go:90] PUT /api/v1/nodes/node-0/status: (3.086387ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:54.219140  110118 httplog.go:90] PUT /api/v1/nodes/node-1/status: (2.346981ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:54.278682  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.167892ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:54.378875  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.356429ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:54.478566  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.154045ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:54.579155  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.669066ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:54.679045  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.94514ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:54.778445  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.986421ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:54.878390  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.960635ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:54.949929  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:54.950156  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:54.950350  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:54.950385  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:54.950619  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:54.951047  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:54.978178  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.790525ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:55.078629  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.085717ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:55.155268  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:55.178077  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.623908ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:55.278597  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.258653ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:55.378206  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.893774ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:55.478395  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.08734ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:55.578775  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.266536ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:55.678486  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.16828ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:55.778465  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.962711ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:55.878272  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.892665ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:55.950148  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:55.950335  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:55.950535  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:55.950695  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:55.950704  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:55.951210  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:55.978271  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.883455ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:56.078197  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.768858ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:56.155694  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:56.178752  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.391931ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:56.222975  110118 httplog.go:90] PUT /api/v1/nodes/node-1/status: (3.08609ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:56.225092  110118 httplog.go:90] PUT /api/v1/nodes/node-0/status: (5.61794ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:56.278523  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.088356ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:56.378756  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.339428ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:56.478468  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.07719ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:56.578636  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.989462ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:56.679037  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.585348ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:56.778564  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.154988ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:56.878726  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.171193ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:56.950377  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:56.950486  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:56.950686  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:56.950806  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:56.950880  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:56.951459  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:56.979219  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.710456ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:57.078957  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.381009ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:57.155925  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:57.178833  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.222149ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:57.278616  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.052959ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:57.381111  110118 httplog.go:90] GET /api/v1/nodes/node-2: (4.733184ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:57.478451  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.086909ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:57.578484  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.02926ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:57.678842  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.380328ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:57.755731  110118 httplog.go:90] GET /api/v1/namespaces/default: (2.254438ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:57.758180  110118 httplog.go:90] GET /api/v1/namespaces/default/services/kubernetes: (1.803251ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:57.760081  110118 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.417056ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:57.761921  110118 httplog.go:90] GET /apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices/kubernetes: (1.369739ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:57.778785  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.226293ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:57.882631  110118 httplog.go:90] GET /api/v1/nodes/node-2: (3.265519ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:57.950585  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:57.950660  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:57.950897  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:57.951013  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:57.951086  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:57.951669  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:57.978377  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.958781ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:58.078603  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.183802ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:58.156233  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:58.178851  110118 httplog.go:90] GET /api/v1/nodes/node-2: (2.424109ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:58.227129  110118 httplog.go:90] PUT /api/v1/nodes/node-1/status: (3.121294ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42096]
I1109 00:29:58.229092  110118 httplog.go:90] PUT /api/v1/nodes/node-0/status: (3.045265ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:58.257159  110118 node_lifecycle_controller.go:1092] Node node-0 ReadyCondition updated. Updating timestamp.
I1109 00:29:58.257232  110118 node_lifecycle_controller.go:1092] Node node-1 ReadyCondition updated. Updating timestamp.
I1109 00:29:58.257304  110118 node_lifecycle_controller.go:1137] node node-2 hasn't been updated for 30.010340298s. Last Ready is: &NodeCondition{Type:Ready,Status:Unknown,LastHeartbeatTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-11-09 00:29:33 +0000 UTC,Reason:NodeStatusUnknown,Message:Kubelet stopped posting node status.,}
I1109 00:29:58.257350  110118 node_lifecycle_controller.go:1137] node node-2 hasn't been updated for 30.010392175s. Last MemoryPressure is: &NodeCondition{Type:MemoryPressure,Status:Unknown,LastHeartbeatTime:2019-11-09 00:29:28 +0000 UTC,LastTransitionTime:2019-11-09 00:29:33 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1109 00:29:58.257371  110118 node_lifecycle_controller.go:1137] node node-2 hasn't been updated for 30.010411495s. Last DiskPressure is: &NodeCondition{Type:DiskPressure,Status:Unknown,LastHeartbeatTime:2019-11-09 00:29:28 +0000 UTC,LastTransitionTime:2019-11-09 00:29:33 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1109 00:29:58.257396  110118 node_lifecycle_controller.go:1137] node node-2 hasn't been updated for 30.010437002s. Last PIDPressure is: &NodeCondition{Type:PIDPressure,Status:Unknown,LastHeartbeatTime:2019-11-09 00:29:28 +0000 UTC,LastTransitionTime:2019-11-09 00:29:33 +0000 UTC,Reason:NodeStatusNeverUpdated,Message:Kubelet never posted node status.,}
I1109 00:29:58.278502  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.906505ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:58.281063  110118 httplog.go:90] GET /api/v1/nodes/node-2: (1.881923ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
Nov  9 00:29:58.281: INFO: Waiting up to 15s for pod "testpod-0" in namespace "taint-based-evictionsa1f7ef65-0fbf-4022-9bf2-eff1cc40c847" to be "updated with tolerationSeconds of 200"
I1109 00:29:58.284417  110118 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsa1f7ef65-0fbf-4022-9bf2-eff1cc40c847/pods/testpod-0: (2.121041ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
Nov  9 00:29:58.284: INFO: Pod "testpod-0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.83462ms
Nov  9 00:29:58.284: INFO: Pod "testpod-0" satisfied condition "updated with tolerationSeconds of 200"
I1109 00:29:58.290550  110118 httplog.go:90] DELETE /api/v1/namespaces/taint-based-evictionsa1f7ef65-0fbf-4022-9bf2-eff1cc40c847/pods/testpod-0: (5.480716ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:58.291215  110118 taint_manager.go:383] Noticed pod deletion: types.NamespacedName{Namespace:"taint-based-evictionsa1f7ef65-0fbf-4022-9bf2-eff1cc40c847", Name:"testpod-0"}
I1109 00:29:58.293940  110118 httplog.go:90] GET /api/v1/namespaces/taint-based-evictionsa1f7ef65-0fbf-4022-9bf2-eff1cc40c847/pods/testpod-0: (1.387358ms) 404 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:58.300942  110118 node_tree.go:100] Removed node "node-0" in group "region1:\x00:zone1" from NodeTree
I1109 00:29:58.300989  110118 taint_manager.go:422] Noticed node deletion: "node-0"
I1109 00:29:58.306480  110118 taint_manager.go:422] Noticed node deletion: "node-1"
I1109 00:29:58.306405  110118 node_tree.go:100] Removed node "node-1" in group "region1:\x00:zone1" from NodeTree
I1109 00:29:58.309427  110118 httplog.go:90] DELETE /api/v1/nodes: (14.8991ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:58.310187  110118 node_tree.go:100] Removed node "node-2" in group "region1:\x00:zone1" from NodeTree
I1109 00:29:58.310279  110118 taint_manager.go:422] Noticed node deletion: "node-2"
I1109 00:29:58.950811  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:58.950875  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:58.951287  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:58.951305  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:58.951331  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:58.951883  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
E1109 00:29:59.020849  110118 factory.go:682] Error getting pod allocatable5de597eb-755b-4b67-bc0c-d94fd5d7c260/pod-test-allocatable2 for retry: Get http://127.0.0.1:34327/api/v1/namespaces/allocatable5de597eb-755b-4b67-bc0c-d94fd5d7c260/pods/pod-test-allocatable2: dial tcp 127.0.0.1:34327: connect: connection refused; retrying...
I1109 00:29:59.156519  110118 reflector.go:278] k8s.io/client-go/informers/factory.go:135: forcing resync
I1109 00:29:59.310192  110118 node_lifecycle_controller.go:601] Shutting down node controller
I1109 00:29:59.310810  110118 httplog.go:90] GET /api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=56023&timeout=8m1s&timeoutSeconds=481&watch=true: (31.366627355s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41988]
I1109 00:29:59.310906  110118 httplog.go:90] GET /apis/apps/v1/daemonsets?allowWatchBookmarks=true&resourceVersion=56023&timeout=5m18s&timeoutSeconds=318&watch=true: (31.162120379s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42064]
I1109 00:29:59.310914  110118 httplog.go:90] GET /api/v1/replicationcontrollers?allowWatchBookmarks=true&resourceVersion=56023&timeout=6m4s&timeoutSeconds=364&watch=true: (31.366831264s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41980]
I1109 00:29:59.310815  110118 httplog.go:90] GET /apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=56023&timeout=6m24s&timeoutSeconds=384&watch=true: (31.365380303s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41998]
I1109 00:29:59.310815  110118 httplog.go:90] GET /apis/policy/v1beta1/poddisruptionbudgets?allowWatchBookmarks=true&resourceVersion=56023&timeout=5m48s&timeoutSeconds=348&watch=true: (31.364765382s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41996]
I1109 00:29:59.311050  110118 httplog.go:90] GET /apis/storage.k8s.io/v1beta1/csinodes?allowWatchBookmarks=true&resourceVersion=56023&timeout=5m53s&timeoutSeconds=353&watch=true: (31.366904799s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41986]
I1109 00:29:59.311085  110118 httplog.go:90] GET /apis/coordination.k8s.io/v1/leases?allowWatchBookmarks=true&resourceVersion=56023&timeout=9m55s&timeoutSeconds=595&watch=true: (31.161523734s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42066]
I1109 00:29:59.311142  110118 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&fieldSelector=status.phase%21%3DFailed%2Cstatus.phase%21%3DSucceeded&resourceVersion=56023&timeoutSeconds=355&watch=true: (31.467632434s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41622]
I1109 00:29:59.311164  110118 httplog.go:90] GET /api/v1/services?allowWatchBookmarks=true&resourceVersion=56318&timeout=6m49s&timeoutSeconds=409&watch=true: (31.366272641s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41994]
I1109 00:29:59.311192  110118 httplog.go:90] GET /api/v1/pods?allowWatchBookmarks=true&resourceVersion=56023&timeout=8m1s&timeoutSeconds=481&watch=true: (31.162436459s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42060]
I1109 00:29:59.311230  110118 httplog.go:90] GET /apis/apps/v1/statefulsets?allowWatchBookmarks=true&resourceVersion=56023&timeout=8m26s&timeoutSeconds=506&watch=true: (31.366803496s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41984]
I1109 00:29:59.310846  110118 httplog.go:90] GET /api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=56023&timeout=9m9s&timeoutSeconds=549&watch=true: (31.364793927s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42000]
I1109 00:29:59.311475  110118 httplog.go:90] GET /api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=56331&timeout=8m30s&timeoutSeconds=510&watch=true: (31.262655513s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42056]
I1109 00:29:59.311324  110118 httplog.go:90] GET /apis/apps/v1/replicasets?allowWatchBookmarks=true&resourceVersion=56023&timeout=6m58s&timeoutSeconds=418&watch=true: (31.366739542s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41982]
I1109 00:29:59.311328  110118 httplog.go:90] GET /api/v1/nodes?allowWatchBookmarks=true&resourceVersion=56023&timeout=8m16s&timeoutSeconds=496&watch=true: (31.366746068s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41992]
I1109 00:29:59.312782  110118 httplog.go:90] DELETE /api/v1/nodes: (1.425237ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:59.312990  110118 controller.go:180] Shutting down kubernetes service endpoint reconciler
I1109 00:29:59.314951  110118 httplog.go:90] GET /api/v1/namespaces/default/endpoints/kubernetes: (1.728409ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:59.317986  110118 httplog.go:90] PUT /api/v1/namespaces/default/endpoints/kubernetes: (2.492627ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:59.319507  110118 httplog.go:90] GET /apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices/kubernetes: (1.032326ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:59.322525  110118 httplog.go:90] PUT /apis/discovery.k8s.io/v1beta1/namespaces/default/endpointslices/kubernetes: (2.628494ms) 200 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:42068]
I1109 00:29:59.323047  110118 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
I1109 00:29:59.323298  110118 httplog.go:90] GET /api/v1/namespaces/kube-system/configmaps?allowWatchBookmarks=true&resourceVersion=56023&timeout=8m11s&timeoutSeconds=491&watch=true: (34.881982997s) 0 [scheduler.test/v0.0.0 (linux/amd64) kubernetes/$Format 127.0.0.1:41618]
    --- FAIL: TestTaintBasedEvictions/Taint_based_evictions_for_NodeNotReady_and_200_tolerationseconds (35.06s)
        taint_test.go:808: Failed to taint node in test 0 <node-2>, err: timed out waiting for the condition

				from junit_304dbea7698c16157bb4586f231ea1f94495b046_20191109-001841.xml



2895 tests passed.

4 tests skipped.