PR tnozicka: #50102 Task 3: Until, backed by retry watcher
Result: FAILURE
Tests: 1 failed / 620 succeeded
Started: 2019-02-11 15:15
Elapsed: 27m26s
Revision:
  master:836db5c9
  67350:01874876
Builder: gke-prow-containerd-pool-99179761-5mt4
pod: c4039701-2e0f-11e9-8746-0a580a6c0714
infra-commit: 0e19c7061
repo: k8s.io/kubernetes
repo-commit: ab8071f58364f671567ac5dd9350a78d57a86a7a
repos: {u'k8s.io/kubernetes': u'master:836db5c90e5706b0418091eb52f26ca3a01a7eee,67350:01874876bfe3fa70b32e8039b5ccfb98b0e59374'}
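The repos field above is serialized as a Python 2-style repr of a dict (note the u'' prefixes). If you need the per-repo refs programmatically, a minimal sketch using only the standard library — assuming the field always has this exact repr form:

```python
import ast

# The repos mapping exactly as it appears in the job metadata above.
repos_repr = "{u'k8s.io/kubernetes': u'master:836db5c90e5706b0418091eb52f26ca3a01a7eee,67350:01874876bfe3fa70b32e8039b5ccfb98b0e59374'}"

# ast.literal_eval safely parses the dict literal; u'' string prefixes
# are accepted by Python 3.3+.
repos = ast.literal_eval(repos_repr)

# Each value is a comma-separated list of ref:sha pairs
# (base branch first, then the PR number).
refs = dict(pair.split(":", 1) for pair in repos["k8s.io/kubernetes"].split(","))
print(refs["master"])  # base commit sha
print(refs["67350"])   # PR head sha
```

This avoids hand-rolled string slicing and keeps working if additional repos or PRs are batched into the same job.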

Test Failures


k8s.io/kubernetes/test/integration/apimachinery [build failed] 0.00s

k8s.io/kubernetes/test/integration/apimachinery [build failed]
from junit_642613dbe8fbf016c1770a7007e34bb12666c617_20190211-153047.xml

620 passed tests (list omitted)

4 skipped tests (list omitted)

Error lines from build-log.txt

... skipping 313 lines ...
W0211 15:25:15.952] I0211 15:25:15.951529   54036 serving.go:311] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
W0211 15:25:15.952] I0211 15:25:15.951606   54036 server.go:561] external host was not specified, using 172.17.0.2
W0211 15:25:15.952] W0211 15:25:15.951616   54036 authentication.go:415] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
W0211 15:25:15.953] I0211 15:25:15.951818   54036 server.go:146] Version: v1.14.0-alpha.2.523+ab8071f58364f6
W0211 15:25:16.579] I0211 15:25:16.578826   54036 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0211 15:25:16.580] I0211 15:25:16.578886   54036 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0211 15:25:16.580] E0211 15:25:16.579372   54036 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 15:25:16.580] E0211 15:25:16.579393   54036 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 15:25:16.580] E0211 15:25:16.579435   54036 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 15:25:16.581] E0211 15:25:16.579467   54036 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 15:25:16.581] E0211 15:25:16.579486   54036 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 15:25:16.581] E0211 15:25:16.579503   54036 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 15:25:16.581] I0211 15:25:16.579518   54036 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0211 15:25:16.582] I0211 15:25:16.579527   54036 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0211 15:25:16.582] I0211 15:25:16.580822   54036 clientconn.go:551] parsed scheme: ""
W0211 15:25:16.582] I0211 15:25:16.580848   54036 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0211 15:25:16.582] I0211 15:25:16.580894   54036 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0211 15:25:16.582] I0211 15:25:16.580971   54036 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 335 lines ...
W0211 15:25:16.925] W0211 15:25:16.924743   54036 genericapiserver.go:330] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0211 15:25:17.578] I0211 15:25:17.577775   54036 clientconn.go:551] parsed scheme: ""
W0211 15:25:17.578] I0211 15:25:17.577820   54036 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0211 15:25:17.579] I0211 15:25:17.577885   54036 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0211 15:25:17.579] I0211 15:25:17.577980   54036 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 15:25:17.579] I0211 15:25:17.578473   54036 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 15:25:17.794] E0211 15:25:17.793541   54036 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 15:25:17.794] E0211 15:25:17.793595   54036 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 15:25:17.794] E0211 15:25:17.793706   54036 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 15:25:17.794] E0211 15:25:17.793813   54036 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 15:25:17.795] E0211 15:25:17.793846   54036 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 15:25:17.795] E0211 15:25:17.793884   54036 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 15:25:17.795] I0211 15:25:17.793912   54036 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0211 15:25:17.795] I0211 15:25:17.793918   54036 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0211 15:25:17.796] I0211 15:25:17.795713   54036 clientconn.go:551] parsed scheme: ""
W0211 15:25:17.796] I0211 15:25:17.795734   54036 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0211 15:25:17.796] I0211 15:25:17.795771   54036 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0211 15:25:17.796] I0211 15:25:17.795810   54036 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 164 lines ...
W0211 15:25:53.274] I0211 15:25:53.029937   57381 garbagecollector.go:130] Starting garbage collector controller
W0211 15:25:53.274] I0211 15:25:53.029982   57381 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0211 15:25:53.274] I0211 15:25:53.029955   57381 controllermanager.go:493] Started "garbagecollector"
W0211 15:25:53.274] I0211 15:25:53.030024   57381 graph_builder.go:308] GraphBuilder running
W0211 15:25:53.275] W0211 15:25:53.030038   57381 controllermanager.go:472] "bootstrapsigner" is disabled
W0211 15:25:53.275] W0211 15:25:53.030054   57381 controllermanager.go:485] Skipping "nodeipam"
W0211 15:25:53.275] E0211 15:25:53.030848   57381 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0211 15:25:53.275] W0211 15:25:53.030893   57381 controllermanager.go:485] Skipping "service"
W0211 15:25:53.275] I0211 15:25:53.031492   57381 controllermanager.go:493] Started "pv-protection"
W0211 15:25:53.276] I0211 15:25:53.031599   57381 pv_protection_controller.go:81] Starting PV protection controller
W0211 15:25:53.276] I0211 15:25:53.031621   57381 controller_utils.go:1021] Waiting for caches to sync for PV protection controller
W0211 15:25:53.276] I0211 15:25:53.032255   57381 controllermanager.go:493] Started "podgc"
W0211 15:25:53.276] I0211 15:25:53.032454   57381 gc_controller.go:76] Starting GC controller
... skipping 2 lines ...
W0211 15:25:53.277] I0211 15:25:53.035067   57381 controllermanager.go:493] Started "ttl"
W0211 15:25:53.277] I0211 15:25:53.035308   57381 daemon_controller.go:267] Starting daemon sets controller
W0211 15:25:53.277] I0211 15:25:53.035361   57381 controller_utils.go:1021] Waiting for caches to sync for daemon sets controller
W0211 15:25:53.277] I0211 15:25:53.035511   57381 ttl_controller.go:116] Starting TTL controller
W0211 15:25:53.277] I0211 15:25:53.035541   57381 controller_utils.go:1021] Waiting for caches to sync for TTL controller
W0211 15:25:53.277] I0211 15:25:53.035981   57381 node_lifecycle_controller.go:77] Sending events to api server
W0211 15:25:53.278] E0211 15:25:53.036295   57381 core.go:162] failed to start cloud node lifecycle controller: no cloud provider provided
W0211 15:25:53.278] W0211 15:25:53.036310   57381 controllermanager.go:485] Skipping "cloud-node-lifecycle"
W0211 15:25:53.278] I0211 15:25:53.036669   57381 controllermanager.go:493] Started "persistentvolume-expander"
W0211 15:25:53.278] I0211 15:25:53.036804   57381 expand_controller.go:153] Starting expand controller
W0211 15:25:53.278] I0211 15:25:53.036829   57381 controller_utils.go:1021] Waiting for caches to sync for expand controller
W0211 15:25:53.278] I0211 15:25:53.037116   57381 controllermanager.go:493] Started "job"
W0211 15:25:53.278] I0211 15:25:53.037278   57381 job_controller.go:143] Starting job controller
W0211 15:25:53.279] I0211 15:25:53.037297   57381 controller_utils.go:1021] Waiting for caches to sync for job controller
W0211 15:25:53.279] E0211 15:25:53.037527   57381 prometheus.go:138] failed to register depth metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_depth", help: "(Deprecated) Current depth of workqueue: disruption-recheck", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_depth" is not a valid metric name
W0211 15:25:53.279] E0211 15:25:53.037549   57381 prometheus.go:150] failed to register adds metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_adds", help: "(Deprecated) Total number of adds handled by workqueue: disruption-recheck", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_adds" is not a valid metric name
W0211 15:25:53.280] E0211 15:25:53.037601   57381 prometheus.go:162] failed to register latency metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_queue_latency", help: "(Deprecated) How long an item stays in workqueuedisruption-recheck before being requested.", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_queue_latency" is not a valid metric name
W0211 15:25:53.280] E0211 15:25:53.037682   57381 prometheus.go:174] failed to register work_duration metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_work_duration", help: "(Deprecated) How long processing an item from workqueuedisruption-recheck takes.", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_work_duration" is not a valid metric name
W0211 15:25:53.280] E0211 15:25:53.037707   57381 prometheus.go:189] failed to register unfinished_work_seconds metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_unfinished_work_seconds", help: "(Deprecated) How many seconds of work disruption-recheck has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_unfinished_work_seconds" is not a valid metric name
W0211 15:25:53.281] E0211 15:25:53.037729   57381 prometheus.go:202] failed to register longest_running_processor_microseconds metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for disruption-recheck been running.", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_longest_running_processor_microseconds" is not a valid metric name
W0211 15:25:53.281] E0211 15:25:53.037757   57381 prometheus.go:214] failed to register retries metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_retries", help: "(Deprecated) Total number of retries handled by workqueue: disruption-recheck", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_retries" is not a valid metric name
W0211 15:25:53.281] I0211 15:25:53.037807   57381 controllermanager.go:493] Started "disruption"
W0211 15:25:53.281] W0211 15:25:53.037823   57381 controllermanager.go:485] Skipping "csrsigning"
W0211 15:25:53.281] I0211 15:25:53.037916   57381 disruption.go:286] Starting disruption controller
W0211 15:25:53.282] I0211 15:25:53.037937   57381 controller_utils.go:1021] Waiting for caches to sync for disruption controller
W0211 15:25:53.282] I0211 15:25:53.038021   57381 controllermanager.go:493] Started "csrcleaner"
W0211 15:25:53.282] I0211 15:25:53.038062   57381 cleaner.go:81] Starting CSR cleaner controller
... skipping 35 lines ...
W0211 15:25:53.288] I0211 15:25:53.095421   57381 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
W0211 15:25:53.288] I0211 15:25:53.095458   57381 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
W0211 15:25:53.288] I0211 15:25:53.095493   57381 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
W0211 15:25:53.289] I0211 15:25:53.095550   57381 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges
W0211 15:25:53.289] I0211 15:25:53.095592   57381 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.extensions
W0211 15:25:53.289] I0211 15:25:53.095637   57381 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
W0211 15:25:53.289] E0211 15:25:53.095895   57381 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0211 15:25:53.289] I0211 15:25:53.095930   57381 controllermanager.go:493] Started "resourcequota"
W0211 15:25:53.290] I0211 15:25:53.095979   57381 resource_quota_controller.go:276] Starting resource quota controller
W0211 15:25:53.290] I0211 15:25:53.096022   57381 controller_utils.go:1021] Waiting for caches to sync for resource quota controller
W0211 15:25:53.290] I0211 15:25:53.096068   57381 resource_quota_monitor.go:301] QuotaMonitor running
W0211 15:25:53.290] I0211 15:25:53.096717   57381 controllermanager.go:493] Started "replicaset"
W0211 15:25:53.290] I0211 15:25:53.096831   57381 replica_set.go:182] Starting replicaset controller
... skipping 24 lines ...
W0211 15:25:53.294] I0211 15:25:53.110026   57381 controller_utils.go:1021] Waiting for caches to sync for deployment controller
W0211 15:25:53.294] I0211 15:25:53.109959   57381 replica_set.go:182] Starting replicationcontroller controller
W0211 15:25:53.294] I0211 15:25:53.110034   57381 controller_utils.go:1021] Waiting for caches to sync for ReplicationController controller
W0211 15:25:53.295] I0211 15:25:53.132740   57381 controller_utils.go:1028] Caches are synced for GC controller
W0211 15:25:53.295] I0211 15:25:53.137910   57381 controller_utils.go:1028] Caches are synced for job controller
W0211 15:25:53.295] I0211 15:25:53.146130   57381 controller_utils.go:1028] Caches are synced for PVC protection controller
W0211 15:25:53.295] W0211 15:25:53.146771   57381 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0211 15:25:53.295] I0211 15:25:53.197114   57381 controller_utils.go:1028] Caches are synced for ReplicaSet controller
W0211 15:25:53.295] I0211 15:25:53.198390   57381 controller_utils.go:1028] Caches are synced for certificate controller
W0211 15:25:53.295] I0211 15:25:53.210195   57381 controller_utils.go:1028] Caches are synced for ReplicationController controller
W0211 15:25:53.295] I0211 15:25:53.210226   57381 controller_utils.go:1028] Caches are synced for deployment controller
W0211 15:25:53.296] I0211 15:25:53.210241   57381 controller_utils.go:1028] Caches are synced for namespace controller
W0211 15:25:53.296] I0211 15:25:53.224935   57381 controller_utils.go:1028] Caches are synced for ClusterRoleAggregator controller
W0211 15:25:53.296] I0211 15:25:53.226949   57381 controller_utils.go:1028] Caches are synced for service account controller
W0211 15:25:53.296] I0211 15:25:53.228838   54036 controller.go:606] quota admission added evaluator for: serviceaccounts
W0211 15:25:53.296] I0211 15:25:53.231897   57381 controller_utils.go:1028] Caches are synced for PV protection controller
W0211 15:25:53.296] E0211 15:25:53.233274   57381 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0211 15:25:53.297] E0211 15:25:53.233365   57381 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
W0211 15:25:53.297] I0211 15:25:53.235714   57381 controller_utils.go:1028] Caches are synced for TTL controller
W0211 15:25:53.297] I0211 15:25:53.236967   57381 controller_utils.go:1028] Caches are synced for expand controller
W0211 15:25:53.297] I0211 15:25:53.238074   57381 controller_utils.go:1028] Caches are synced for disruption controller
W0211 15:25:53.297] I0211 15:25:53.238094   57381 disruption.go:294] Sending events to api server.
W0211 15:25:53.298] E0211 15:25:53.239016   57381 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0211 15:25:53.298] I0211 15:25:53.239768   57381 controller_utils.go:1028] Caches are synced for attach detach controller
W0211 15:25:53.298] I0211 15:25:53.241544   57381 controller_utils.go:1028] Caches are synced for persistent volume controller
W0211 15:25:53.410] I0211 15:25:53.410226   57381 controller_utils.go:1028] Caches are synced for stateful set controller
W0211 15:25:53.436] I0211 15:25:53.435771   57381 controller_utils.go:1028] Caches are synced for daemon sets controller
W0211 15:25:53.441] I0211 15:25:53.440770   57381 controller_utils.go:1028] Caches are synced for taint controller
W0211 15:25:53.441] I0211 15:25:53.440920   57381 node_lifecycle_controller.go:1113] Initializing eviction metric for zone: 
... skipping 61 lines ...
I0211 15:25:55.069] +++ working dir: /go/src/k8s.io/kubernetes
I0211 15:25:55.070] +++ command: run_kubectl_local_proxy_tests
I0211 15:25:55.078] +++ [0211 15:25:55] Testing kubectl local proxy
I0211 15:25:55.082] +++ [0211 15:25:55] Starting kubectl proxy on random port; output file in proxy-port.out.pU4HM; args: 
W0211 15:25:55.183] I0211 15:25:54.527184   57381 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0211 15:25:55.183] I0211 15:25:54.627463   57381 controller_utils.go:1028] Caches are synced for garbage collector controller
W0211 15:25:55.183] E0211 15:25:54.644692   57381 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
I0211 15:25:55.588] +++ [0211 15:25:55] Attempt 0 to read proxy-port.out.pU4HM...
I0211 15:25:55.593] +++ [0211 15:25:55] kubectl proxy running on port 43111
I0211 15:25:55.612] +++ [0211 15:25:55] On try 1, kubectl proxy: ok
I0211 15:25:55.712] +++ [0211 15:25:55] Stopping proxy on port 43111
I0211 15:25:55.719] +++ [0211 15:25:55] Starting kubectl proxy on random port; output file in proxy-port.out.xVgBZ; args: 
W0211 15:25:55.819] /go/src/k8s.io/kubernetes/hack/lib/logging.sh: line 166: 58032 Terminated              kubectl proxy --port=0 --www=. > ${PROXY_PORT_FILE} 2>&1
... skipping 15 lines ...
I0211 15:25:56.979] +++ working dir: /go/src/k8s.io/kubernetes
I0211 15:25:56.981] +++ command: run_RESTMapper_evaluation_tests
I0211 15:25:56.991] +++ [0211 15:25:56] Creating namespace namespace-1549898756-27
I0211 15:25:57.058] namespace/namespace-1549898756-27 created
I0211 15:25:57.120] Context "test" modified.
I0211 15:25:57.127] +++ [0211 15:25:57] Testing RESTMapper
I0211 15:25:57.236] +++ [0211 15:25:57] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0211 15:25:57.249] +++ exit code: 0
I0211 15:25:57.349] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0211 15:25:57.349] bindings                                                                      true         Binding
I0211 15:25:57.350] componentstatuses                 cs                                          false        ComponentStatus
I0211 15:25:57.350] configmaps                        cm                                          true         ConfigMap
I0211 15:25:57.350] endpoints                         ep                                          true         Endpoints
... skipping 585 lines ...
I0211 15:26:15.063] core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 15:26:15.317] (Bcore.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 15:26:15.447] (Bcore.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 15:26:15.692] (Bcore.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 15:26:15.816] (Bcore.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 15:26:15.935] (Bpod "valid-pod" force deleted
W0211 15:26:16.036] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0211 15:26:16.036] error: setting 'all' parameter but found a non empty selector. 
W0211 15:26:16.037] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0211 15:26:16.137] core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{$id_field}}:{{end}}: 
I0211 15:26:16.205] (Bcore.sh:211: Successful get namespaces {{range.items}}{{ if eq $id_field \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
I0211 15:26:16.312] (Bnamespace/test-kubectl-describe-pod created
I0211 15:26:16.447] core.sh:215: Successful get namespaces/test-kubectl-describe-pod {{.metadata.name}}: test-kubectl-describe-pod
I0211 15:26:16.573] (Bcore.sh:219: Successful get secrets --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 11 lines ...
I0211 15:26:17.869] (Bpoddisruptionbudget.policy/test-pdb-3 created
I0211 15:26:17.959] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0211 15:26:18.026] (Bpoddisruptionbudget.policy/test-pdb-4 created
I0211 15:26:18.112] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0211 15:26:18.260] (Bcore.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 15:26:18.434] (Bpod/env-test-pod created
W0211 15:26:18.534] error: min-available and max-unavailable cannot be both specified
I0211 15:26:18.635] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0211 15:26:18.635] Name:               env-test-pod
I0211 15:26:18.635] Namespace:          test-kubectl-describe-pod
I0211 15:26:18.636] Priority:           0
I0211 15:26:18.636] PriorityClassName:  <none>
I0211 15:26:18.636] Node:               <none>
... skipping 145 lines ...
I0211 15:26:30.260] replicationcontroller "modified" deleted
W0211 15:26:30.361] I0211 15:26:29.916770   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549898785-17098", Name:"modified", UID:"679c1bc8-2e11-11e9-b672-0242ac110002", APIVersion:"v1", ResourceVersion:"374", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: modified-sscn5
I0211 15:26:30.503] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 15:26:30.666] (Bpod/valid-pod created
I0211 15:26:30.766] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 15:26:30.913] (BSuccessful
I0211 15:26:30.913] message:Error from server: cannot restore map from string
I0211 15:26:30.913] has:cannot restore map from string
I0211 15:26:30.996] Successful
I0211 15:26:30.996] message:pod/valid-pod patched (no change)
I0211 15:26:30.996] has:patched (no change)
I0211 15:26:31.076] pod/valid-pod patched
I0211 15:26:31.166] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
... skipping 5 lines ...
I0211 15:26:31.645] (Bpod/valid-pod patched
I0211 15:26:31.736] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0211 15:26:31.808] (Bpod/valid-pod patched
I0211 15:26:31.900] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0211 15:26:32.052] (Bpod/valid-pod patched
I0211 15:26:32.148] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0211 15:26:32.316] (B+++ [0211 15:26:32] "kubectl patch with resourceVersion 494" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
W0211 15:26:32.416] E0211 15:26:30.904680   54036 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
I0211 15:26:32.548] pod "valid-pod" deleted
I0211 15:26:32.556] pod/valid-pod replaced
I0211 15:26:32.654] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0211 15:26:32.805] (BSuccessful
I0211 15:26:32.805] message:error: --grace-period must have --force specified
I0211 15:26:32.805] has:\-\-grace-period must have \-\-force specified
I0211 15:26:32.972] Successful
I0211 15:26:32.972] message:error: --timeout must have --force specified
I0211 15:26:32.972] has:\-\-timeout must have \-\-force specified
I0211 15:26:33.121] node/node-v1-test created
W0211 15:26:33.222] W0211 15:26:33.121368   57381 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0211 15:26:33.323] node/node-v1-test replaced
I0211 15:26:33.380] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0211 15:26:33.455] (Bnode "node-v1-test" deleted
I0211 15:26:33.554] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0211 15:26:33.827] (Bcore.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0211 15:26:34.784] (Bcore.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 58 lines ...
I0211 15:26:38.844] (Bpod/test-pod created
W0211 15:26:38.945] I0211 15:26:33.443928   57381 event.go:209] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-v1-test", UID:"6985816b-2e11-11e9-b672-0242ac110002", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-v1-test event: Registered Node node-v1-test in Controller
W0211 15:26:38.945] Edit cancelled, no changes made.
W0211 15:26:38.945] Edit cancelled, no changes made.
W0211 15:26:38.945] Edit cancelled, no changes made.
W0211 15:26:38.945] Edit cancelled, no changes made.
W0211 15:26:38.945] error: 'name' already has a value (valid-pod), and --overwrite is false
W0211 15:26:38.946] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0211 15:26:38.946] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
W0211 15:26:38.946] I0211 15:26:38.444193   57381 event.go:209] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-v1-test", UID:"6985816b-2e11-11e9-b672-0242ac110002", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RemovingNode' Node node-v1-test event: Removing Node node-v1-test from Controller
I0211 15:26:39.046] pod "test-pod" deleted
I0211 15:26:39.047] +++ [0211 15:26:39] Creating namespace namespace-1549898799-7417
I0211 15:26:39.103] namespace/namespace-1549898799-7417 created
... skipping 42 lines ...
I0211 15:26:42.156] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0211 15:26:42.158] +++ working dir: /go/src/k8s.io/kubernetes
I0211 15:26:42.160] +++ command: run_kubectl_create_error_tests
I0211 15:26:42.171] +++ [0211 15:26:42] Creating namespace namespace-1549898802-4042
I0211 15:26:42.238] namespace/namespace-1549898802-4042 created
I0211 15:26:42.304] Context "test" modified.
I0211 15:26:42.310] +++ [0211 15:26:42] Testing kubectl create with error
W0211 15:26:42.411] Error: required flag(s) "filename" not set
W0211 15:26:42.411] 
W0211 15:26:42.411] 
W0211 15:26:42.411] Examples:
W0211 15:26:42.411]   # Create a pod using the data in pod.json.
W0211 15:26:42.411]   kubectl create -f ./pod.json
W0211 15:26:42.412]   
... skipping 38 lines ...
W0211 15:26:42.416]   kubectl create -f FILENAME [options]
W0211 15:26:42.416] 
W0211 15:26:42.416] Use "kubectl <command> --help" for more information about a given command.
W0211 15:26:42.416] Use "kubectl options" for a list of global command-line options (applies to all commands).
W0211 15:26:42.416] 
W0211 15:26:42.416] required flag(s) "filename" not set
I0211 15:26:42.547] +++ [0211 15:26:42] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0211 15:26:42.648] kubectl convert is DEPRECATED and will be removed in a future version.
W0211 15:26:42.648] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0211 15:26:42.749] +++ exit code: 0
I0211 15:26:42.749] Recording: run_kubectl_apply_tests
I0211 15:26:42.749] Running command: run_kubectl_apply_tests
I0211 15:26:42.762] 
... skipping 13 lines ...
I0211 15:26:43.732] apply.sh:47: Successful get deployments {{range.items}}{{.metadata.name}}{{end}}: test-deployment-retainkeys
I0211 15:26:44.585] (Bdeployment.extensions "test-deployment-retainkeys" deleted
I0211 15:26:44.678] apply.sh:67: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 15:26:44.836] (Bpod/selector-test-pod created
I0211 15:26:44.930] apply.sh:71: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0211 15:26:45.011] (BSuccessful
I0211 15:26:45.011] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0211 15:26:45.012] has:pods "selector-test-pod-dont-apply" not found
I0211 15:26:45.086] pod "selector-test-pod" deleted
I0211 15:26:45.175] apply.sh:80: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 15:26:45.404] (Bpod/test-pod created (server dry run)
I0211 15:26:45.504] apply.sh:85: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 15:26:45.659] (Bpod/test-pod created
... skipping 12 lines ...
W0211 15:26:46.576] I0211 15:26:46.575409   54036 clientconn.go:551] parsed scheme: ""
W0211 15:26:46.576] I0211 15:26:46.575439   54036 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0211 15:26:46.577] I0211 15:26:46.575469   54036 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0211 15:26:46.577] I0211 15:26:46.575507   54036 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 15:26:46.577] I0211 15:26:46.575845   54036 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 15:26:46.582] I0211 15:26:46.581353   54036 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0211 15:26:46.670] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0211 15:26:46.770] kind.mygroup.example.com/myobj created (server dry run)
I0211 15:26:46.771] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0211 15:26:46.852] apply.sh:129: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 15:26:47.023] (Bpod/a created
I0211 15:26:48.329] apply.sh:134: Successful get pods a {{.metadata.name}}: a
I0211 15:26:48.410] (BSuccessful
I0211 15:26:48.410] message:Error from server (NotFound): pods "b" not found
I0211 15:26:48.410] has:pods "b" not found
I0211 15:26:48.582] pod/b created
I0211 15:26:48.594] pod/a pruned
I0211 15:26:50.083] apply.sh:142: Successful get pods b {{.metadata.name}}: b
I0211 15:26:50.165] (BSuccessful
I0211 15:26:50.166] message:Error from server (NotFound): pods "a" not found
I0211 15:26:50.166] has:pods "a" not found
I0211 15:26:50.240] pod "b" deleted
I0211 15:26:50.330] apply.sh:152: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 15:26:50.485] (Bpod/a created
I0211 15:26:50.581] apply.sh:157: Successful get pods a {{.metadata.name}}: a
I0211 15:26:50.664] (BSuccessful
I0211 15:26:50.665] message:Error from server (NotFound): pods "b" not found
I0211 15:26:50.665] has:pods "b" not found
I0211 15:26:50.820] pod/b created
I0211 15:26:50.912] apply.sh:165: Successful get pods a {{.metadata.name}}: a
I0211 15:26:50.996] (Bapply.sh:166: Successful get pods b {{.metadata.name}}: b
I0211 15:26:51.068] (Bpod "a" deleted
I0211 15:26:51.071] pod "b" deleted
I0211 15:26:51.230] Successful
I0211 15:26:51.230] message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
I0211 15:26:51.230] has:all resources selected for prune without explicitly passing --all
I0211 15:26:51.376] pod/a created
I0211 15:26:51.381] pod/b created
I0211 15:26:51.387] service/prune-svc created
I0211 15:26:52.688] apply.sh:178: Successful get pods a {{.metadata.name}}: a
I0211 15:26:52.772] (Bapply.sh:179: Successful get pods b {{.metadata.name}}: b
... skipping 127 lines ...
I0211 15:27:04.189] Context "test" modified.
I0211 15:27:04.196] +++ [0211 15:27:04] Testing kubectl create filter
I0211 15:27:04.280] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 15:27:04.432] (Bpod/selector-test-pod created
I0211 15:27:04.523] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0211 15:27:04.603] (BSuccessful
I0211 15:27:04.604] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0211 15:27:04.604] has:pods "selector-test-pod-dont-apply" not found
I0211 15:27:04.673] pod "selector-test-pod" deleted
I0211 15:27:04.692] +++ exit code: 0
I0211 15:27:04.722] Recording: run_kubectl_apply_deployments_tests
I0211 15:27:04.722] Running command: run_kubectl_apply_deployments_tests
I0211 15:27:04.739] 
... skipping 37 lines ...
W0211 15:27:06.518] I0211 15:27:03.535796   54036 controller.go:606] quota admission added evaluator for: cronjobs.batch
W0211 15:27:06.518] I0211 15:27:05.312672   54036 controller.go:606] quota admission added evaluator for: deployments.extensions
W0211 15:27:06.518] I0211 15:27:05.317611   57381 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549898824-24405", Name:"my-depl", UID:"7cb5daa7-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"662", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set my-depl-656cffcbcc to 1
W0211 15:27:06.518] I0211 15:27:05.322340   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898824-24405", Name:"my-depl-656cffcbcc", UID:"7cb658c7-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"663", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-depl-656cffcbcc-httw4
W0211 15:27:06.518] I0211 15:27:05.826005   57381 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549898824-24405", Name:"my-depl", UID:"7cb5daa7-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"672", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set my-depl-64775887d7 to 1
W0211 15:27:06.519] I0211 15:27:05.827458   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898824-24405", Name:"my-depl-64775887d7", UID:"7d03d6a7-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"674", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-depl-64775887d7-5rk4t
W0211 15:27:06.519] E0211 15:27:06.410185   57381 replica_set.go:450] Sync "namespace-1549898824-24405/my-depl-64775887d7" failed with replicasets.apps "my-depl-64775887d7" not found
W0211 15:27:06.519] E0211 15:27:06.413919   57381 replica_set.go:450] Sync "namespace-1549898824-24405/my-depl-64775887d7" failed with replicasets.apps "my-depl-64775887d7" not found
W0211 15:27:06.519] I0211 15:27:06.430648   54036 controller.go:606] quota admission added evaluator for: replicasets.extensions
I0211 15:27:06.620] apps.sh:137: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 15:27:06.620] (Bapps.sh:138: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 15:27:06.684] (Bapps.sh:139: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 15:27:06.764] (Bapps.sh:143: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 15:27:06.918] (Bdeployment.extensions/nginx created
I0211 15:27:07.014] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I0211 15:27:11.210] (BSuccessful
I0211 15:27:11.211] message:Error from server (Conflict): error when applying patch:
I0211 15:27:11.211] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1549898824-24405\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0211 15:27:11.211] to:
I0211 15:27:11.211] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I0211 15:27:11.212] Name: "nginx", Namespace: "namespace-1549898824-24405"
I0211 15:27:11.213] Object: &{map["spec":map["replicas":'\x03' "selector":map["matchLabels":map["name":"nginx1"]] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["terminationGracePeriodSeconds":'\x1e' "dnsPolicy":"ClusterFirst" "securityContext":map[] "schedulerName":"default-scheduler" "containers":[map["terminationMessagePolicy":"File" "imagePullPolicy":"IfNotPresent" "name":"nginx" "image":"k8s.gcr.io/nginx:test-cmd" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log"]] "restartPolicy":"Always"]] "strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxUnavailable":'\x01' "maxSurge":'\x01']] "revisionHistoryLimit":%!q(int64=+2147483647) "progressDeadlineSeconds":%!q(int64=+2147483647)] "status":map["replicas":'\x03' "updatedReplicas":'\x03' "unavailableReplicas":'\x03' "conditions":[map["status":"False" "lastUpdateTime":"2019-02-11T15:27:06Z" "lastTransitionTime":"2019-02-11T15:27:06Z" "reason":"MinimumReplicasUnavailable" "message":"Deployment does not have minimum availability." 
"type":"Available"]] "observedGeneration":'\x01'] "kind":"Deployment" "apiVersion":"extensions/v1beta1" "metadata":map["creationTimestamp":"2019-02-11T15:27:06Z" "labels":map["name":"nginx"] "annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1549898824-24405\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "uid":"7daa913a-2e11-11e9-b672-0242ac110002" "resourceVersion":"713" "generation":'\x01' "name":"nginx" "namespace":"namespace-1549898824-24405" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1549898824-24405/deployments/nginx"]]}
I0211 15:27:11.213] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I0211 15:27:11.214] has:Error from server (Conflict)
W0211 15:27:11.314] I0211 15:27:06.921510   57381 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549898824-24405", Name:"nginx", UID:"7daa913a-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"700", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-776cc67f78 to 3
W0211 15:27:11.315] I0211 15:27:06.924223   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898824-24405", Name:"nginx-776cc67f78", UID:"7dab1444-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"701", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-776cc67f78-947j4
W0211 15:27:11.315] I0211 15:27:06.926479   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898824-24405", Name:"nginx-776cc67f78", UID:"7dab1444-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"701", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-776cc67f78-rvltm
W0211 15:27:11.315] I0211 15:27:06.926606   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898824-24405", Name:"nginx-776cc67f78", UID:"7dab1444-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"701", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-776cc67f78-t2jsk
I0211 15:27:16.418] deployment.extensions/nginx configured
I0211 15:27:16.511] Successful
... skipping 145 lines ...
I0211 15:27:23.596] +++ [0211 15:27:23] Creating namespace namespace-1549898843-4034
I0211 15:27:23.666] namespace/namespace-1549898843-4034 created
I0211 15:27:23.739] Context "test" modified.
I0211 15:27:23.747] +++ [0211 15:27:23] Testing kubectl get
I0211 15:27:23.833] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 15:27:23.914] (BSuccessful
I0211 15:27:23.914] message:Error from server (NotFound): pods "abc" not found
I0211 15:27:23.914] has:pods "abc" not found
I0211 15:27:23.999] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 15:27:24.081] (BSuccessful
I0211 15:27:24.081] message:Error from server (NotFound): pods "abc" not found
I0211 15:27:24.081] has:pods "abc" not found
I0211 15:27:24.167] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 15:27:24.243] (BSuccessful
I0211 15:27:24.243] message:{
I0211 15:27:24.243]     "apiVersion": "v1",
I0211 15:27:24.243]     "items": [],
... skipping 23 lines ...
I0211 15:27:24.561] has not:No resources found
I0211 15:27:24.642] Successful
I0211 15:27:24.642] message:NAME
I0211 15:27:24.642] has not:No resources found
I0211 15:27:24.730] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 15:27:24.843] (BSuccessful
I0211 15:27:24.843] message:error: the server doesn't have a resource type "foobar"
I0211 15:27:24.844] has not:No resources found
I0211 15:27:24.921] Successful
I0211 15:27:24.922] message:No resources found.
I0211 15:27:24.922] has:No resources found
I0211 15:27:25.002] Successful
I0211 15:27:25.002] message:
I0211 15:27:25.002] has not:No resources found
I0211 15:27:25.082] Successful
I0211 15:27:25.082] message:No resources found.
I0211 15:27:25.082] has:No resources found
I0211 15:27:25.166] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 15:27:25.245] (BSuccessful
I0211 15:27:25.245] message:Error from server (NotFound): pods "abc" not found
I0211 15:27:25.245] has:pods "abc" not found
I0211 15:27:25.246] FAIL!
I0211 15:27:25.247] message:Error from server (NotFound): pods "abc" not found
I0211 15:27:25.247] has not:List
I0211 15:27:25.247] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0211 15:27:25.350] Successful
I0211 15:27:25.350] message:I0211 15:27:25.306422   69941 loader.go:359] Config loaded from file /tmp/tmp.5MIdEmpiUz/.kube/config
I0211 15:27:25.350] I0211 15:27:25.307801   69941 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0211 15:27:25.350] I0211 15:27:25.327264   69941 round_trippers.go:438] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 653 lines ...
I0211 15:27:28.728] }
I0211 15:27:28.817] get.sh:155: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 15:27:29.050] (B<no value>Successful
I0211 15:27:29.051] message:valid-pod:
I0211 15:27:29.051] has:valid-pod:
I0211 15:27:29.129] Successful
I0211 15:27:29.130] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0211 15:27:29.130] 	template was:
I0211 15:27:29.130] 		{.missing}
I0211 15:27:29.130] 	object given to jsonpath engine was:
I0211 15:27:29.131] 		map[string]interface {}{"status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}, "kind":"Pod", "apiVersion":"v1", "metadata":map[string]interface {}{"name":"valid-pod", "namespace":"namespace-1549898848-5399", "selfLink":"/api/v1/namespaces/namespace-1549898848-5399/pods/valid-pod", "uid":"8a9b0564-2e11-11e9-b672-0242ac110002", "resourceVersion":"808", "creationTimestamp":"2019-02-11T15:27:28Z", "labels":map[string]interface {}{"name":"valid-pod"}}, "spec":map[string]interface {}{"terminationGracePeriodSeconds":30, "dnsPolicy":"ClusterFirst", "securityContext":map[string]interface {}{}, "schedulerName":"default-scheduler", "priority":0, "enableServiceLinks":true, "containers":[]interface {}{map[string]interface {}{"resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "image":"k8s.gcr.io/serve_hostname"}}, "restartPolicy":"Always"}}
I0211 15:27:29.131] has:missing is not found
I0211 15:27:29.210] Successful
I0211 15:27:29.210] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0211 15:27:29.211] 	template was:
I0211 15:27:29.211] 		{{.missing}}
I0211 15:27:29.211] 	raw data was:
I0211 15:27:29.211] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-02-11T15:27:28Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1549898848-5399","resourceVersion":"808","selfLink":"/api/v1/namespaces/namespace-1549898848-5399/pods/valid-pod","uid":"8a9b0564-2e11-11e9-b672-0242ac110002"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0211 15:27:29.211] 	object given to template engine was:
I0211 15:27:29.212] 		map[apiVersion:v1 kind:Pod metadata:map[resourceVersion:808 selfLink:/api/v1/namespaces/namespace-1549898848-5399/pods/valid-pod uid:8a9b0564-2e11-11e9-b672-0242ac110002 creationTimestamp:2019-02-11T15:27:28Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1549898848-5399] spec:map[dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30 containers:[map[resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname]]] status:map[qosClass:Guaranteed phase:Pending]]
I0211 15:27:29.212] has:map has no entry for key "missing"
W0211 15:27:29.312] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
W0211 15:27:30.284] E0211 15:27:30.283984   70329 streamwatcher.go:109] Unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)
I0211 15:27:30.385] Successful
I0211 15:27:30.385] message:NAME        READY   STATUS    RESTARTS   AGE
I0211 15:27:30.385] valid-pod   0/1     Pending   0          1s
I0211 15:27:30.385] has:STATUS
I0211 15:27:30.385] Successful
... skipping 80 lines ...
I0211 15:27:32.557]   terminationGracePeriodSeconds: 30
I0211 15:27:32.557] status:
I0211 15:27:32.557]   phase: Pending
I0211 15:27:32.557]   qosClass: Guaranteed
I0211 15:27:32.557] has:name: valid-pod
I0211 15:27:32.557] Successful
I0211 15:27:32.557] message:Error from server (NotFound): pods "invalid-pod" not found
I0211 15:27:32.558] has:"invalid-pod" not found
I0211 15:27:32.613] pod "valid-pod" deleted
I0211 15:27:32.701] get.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 15:27:32.861] (Bpod/redis-master created
I0211 15:27:32.863] pod/valid-pod created
I0211 15:27:32.955] Successful
... skipping 247 lines ...
I0211 15:27:36.909] Running command: run_create_secret_tests
I0211 15:27:36.928] 
I0211 15:27:36.930] +++ Running case: test-cmd.run_create_secret_tests 
I0211 15:27:36.932] +++ working dir: /go/src/k8s.io/kubernetes
I0211 15:27:36.934] +++ command: run_create_secret_tests
I0211 15:27:37.021] Successful
I0211 15:27:37.022] message:Error from server (NotFound): secrets "mysecret" not found
I0211 15:27:37.022] has:secrets "mysecret" not found
W0211 15:27:37.122] I0211 15:27:36.133202   54036 clientconn.go:551] parsed scheme: ""
W0211 15:27:37.123] I0211 15:27:36.133279   54036 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0211 15:27:37.123] I0211 15:27:36.133322   54036 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0211 15:27:37.123] I0211 15:27:36.133380   54036 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 15:27:37.123] I0211 15:27:36.133948   54036 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 15:27:37.123] No resources found.
W0211 15:27:37.123] No resources found.
I0211 15:27:37.224] Successful
I0211 15:27:37.224] message:Error from server (NotFound): secrets "mysecret" not found
I0211 15:27:37.224] has:secrets "mysecret" not found
I0211 15:27:37.224] Successful
I0211 15:27:37.224] message:user-specified
I0211 15:27:37.224] has:user-specified
I0211 15:27:37.229] Successful
I0211 15:27:37.296] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"8fc59902-2e11-11e9-b672-0242ac110002","resourceVersion":"883","creationTimestamp":"2019-02-11T15:27:37Z"}}
... skipping 99 lines ...
I0211 15:27:40.061] has:Timeout exceeded while reading body
I0211 15:27:40.139] Successful
I0211 15:27:40.139] message:NAME        READY   STATUS    RESTARTS   AGE
I0211 15:27:40.139] valid-pod   0/1     Pending   0          2s
I0211 15:27:40.139] has:valid-pod
I0211 15:27:40.203] Successful
I0211 15:27:40.203] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0211 15:27:40.203] has:Invalid timeout value
I0211 15:27:40.273] pod "valid-pod" deleted
I0211 15:27:40.292] +++ exit code: 0
I0211 15:27:40.322] Recording: run_crd_tests
I0211 15:27:40.323] Running command: run_crd_tests
I0211 15:27:40.340] 
... skipping 166 lines ...
I0211 15:27:44.397] foo.company.com/test patched
I0211 15:27:44.483] crd.sh:237: Successful get foos/test {{.patched}}: value1
I0211 15:27:44.559] (Bfoo.company.com/test patched
I0211 15:27:44.641] crd.sh:239: Successful get foos/test {{.patched}}: value2
I0211 15:27:44.715] (Bfoo.company.com/test patched
I0211 15:27:44.806] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I0211 15:27:44.950] (B+++ [0211 15:27:44] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0211 15:27:45.012] {
I0211 15:27:45.012]     "apiVersion": "company.com/v1",
I0211 15:27:45.012]     "kind": "Foo",
I0211 15:27:45.012]     "metadata": {
I0211 15:27:45.012]         "annotations": {
I0211 15:27:45.013]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 113 lines ...
W0211 15:27:46.469] I0211 15:27:42.840475   54036 controller.go:606] quota admission added evaluator for: foos.company.com
W0211 15:27:46.470] I0211 15:27:46.121820   54036 controller.go:606] quota admission added evaluator for: bars.company.com
W0211 15:27:46.470] /go/src/k8s.io/kubernetes/hack/lib/test.sh: line 264: 73114 Killed                  while [ ${tries} -lt 10 ]; do
W0211 15:27:46.470]     tries=$((tries+1)); kubectl "${kube_flags[@]}" patch bars/test -p "{\"patched\":\"${tries}\"}" --type=merge; sleep 1;
W0211 15:27:46.470] done
W0211 15:27:46.470] /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh: line 295: 73113 Killed                  kubectl "${kube_flags[@]}" get bars --request-timeout=1m --watch-only -o name
W0211 15:27:54.953] E0211 15:27:54.952095   57381 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "mygroup.example.com/v1alpha1, Resource=resources": unable to monitor quota for resource "mygroup.example.com/v1alpha1, Resource=resources", couldn't start monitor for resource "company.com/v1, Resource=foos": unable to monitor quota for resource "company.com/v1, Resource=foos", couldn't start monitor for resource "company.com/v1, Resource=bars": unable to monitor quota for resource "company.com/v1, Resource=bars", couldn't start monitor for resource "company.com/v1, Resource=validfoos": unable to monitor quota for resource "company.com/v1, Resource=validfoos", couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"]
W0211 15:27:55.088] I0211 15:27:55.087896   57381 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0211 15:27:55.089] I0211 15:27:55.089003   54036 clientconn.go:551] parsed scheme: ""
W0211 15:27:55.089] I0211 15:27:55.089082   54036 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0211 15:27:55.090] I0211 15:27:55.089123   54036 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0211 15:27:55.090] I0211 15:27:55.089412   54036 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 15:27:55.090] I0211 15:27:55.089835   54036 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 81 lines ...
I0211 15:28:07.029] +++ [0211 15:28:07] Testing cmd with image
I0211 15:28:07.118] Successful
I0211 15:28:07.119] message:deployment.apps/test1 created
I0211 15:28:07.119] has:deployment.apps/test1 created
I0211 15:28:07.191] deployment.extensions "test1" deleted
I0211 15:28:07.264] Successful
I0211 15:28:07.265] message:error: Invalid image name "InvalidImageName": invalid reference format
I0211 15:28:07.265] has:error: Invalid image name "InvalidImageName": invalid reference format
I0211 15:28:07.276] +++ exit code: 0
I0211 15:28:07.314] +++ [0211 15:28:07] Testing recursive resources
I0211 15:28:07.319] +++ [0211 15:28:07] Creating namespace namespace-1549898887-1141
I0211 15:28:07.389] namespace/namespace-1549898887-1141 created
I0211 15:28:07.453] Context "test" modified.
I0211 15:28:07.536] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 15:28:07.790] (Bgeneric-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 15:28:07.792] (BSuccessful
I0211 15:28:07.792] message:pod/busybox0 created
I0211 15:28:07.792] pod/busybox1 created
I0211 15:28:07.793] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0211 15:28:07.793] has:error validating data: kind not set
I0211 15:28:07.873] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 15:28:08.029] (Bgeneric-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0211 15:28:08.031] (BSuccessful
I0211 15:28:08.032] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 15:28:08.032] has:Object 'Kind' is missing
I0211 15:28:08.117] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 15:28:08.356] (Bgeneric-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0211 15:28:08.358] (BSuccessful
I0211 15:28:08.358] message:pod/busybox0 replaced
I0211 15:28:08.358] pod/busybox1 replaced
I0211 15:28:08.359] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0211 15:28:08.359] has:error validating data: kind not set
I0211 15:28:08.438] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 15:28:08.530] (BSuccessful
I0211 15:28:08.531] message:Name:               busybox0
I0211 15:28:08.531] Namespace:          namespace-1549898887-1141
I0211 15:28:08.531] Priority:           0
I0211 15:28:08.531] PriorityClassName:  <none>
... skipping 159 lines ...
I0211 15:28:08.550] has:Object 'Kind' is missing
I0211 15:28:08.623] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 15:28:08.801] (Bgeneric-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0211 15:28:08.803] (BSuccessful
I0211 15:28:08.803] message:pod/busybox0 annotated
I0211 15:28:08.803] pod/busybox1 annotated
I0211 15:28:08.803] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 15:28:08.804] has:Object 'Kind' is missing
I0211 15:28:08.889] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 15:28:09.144] (Bgeneric-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0211 15:28:09.146] (BSuccessful
I0211 15:28:09.147] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0211 15:28:09.147] pod/busybox0 configured
I0211 15:28:09.147] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0211 15:28:09.147] pod/busybox1 configured
I0211 15:28:09.147] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0211 15:28:09.147] has:error validating data: kind not set
I0211 15:28:09.230] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 15:28:09.379] (Bdeployment.apps/nginx created
W0211 15:28:09.480] Error from server (NotFound): namespaces "non-native-resources" not found
W0211 15:28:09.480] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0211 15:28:09.481] I0211 15:28:07.106785   57381 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549898886-6995", Name:"test1", UID:"a18a0d9e-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1002", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-848d5d4b47 to 1
W0211 15:28:09.481] I0211 15:28:07.111041   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898886-6995", Name:"test1-848d5d4b47", UID:"a18a8dfe-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1003", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-848d5d4b47-6k7zp
W0211 15:28:09.481] I0211 15:28:09.382244   57381 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549898887-1141", Name:"nginx", UID:"a2e541ab-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1028", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-5f7cff5b56 to 3
W0211 15:28:09.481] I0211 15:28:09.385329   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898887-1141", Name:"nginx-5f7cff5b56", UID:"a2e5d835-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1029", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5f7cff5b56-nmbrp
W0211 15:28:09.482] I0211 15:28:09.394119   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898887-1141", Name:"nginx-5f7cff5b56", UID:"a2e5d835-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1029", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5f7cff5b56-9l46h
... skipping 47 lines ...
I0211 15:28:09.818] deployment.extensions "nginx" deleted
I0211 15:28:09.913] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 15:28:10.073] generic-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 15:28:10.075] Successful
I0211 15:28:10.075] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0211 15:28:10.076] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0211 15:28:10.076] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 15:28:10.076] has:Object 'Kind' is missing
I0211 15:28:10.158] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 15:28:10.238] Successful
I0211 15:28:10.238] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 15:28:10.239] has:busybox0:busybox1:
I0211 15:28:10.240] Successful
I0211 15:28:10.240] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 15:28:10.240] has:Object 'Kind' is missing
I0211 15:28:10.328] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 15:28:10.410] pod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 15:28:10.497] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0211 15:28:10.500] Successful
I0211 15:28:10.500] message:pod/busybox0 labeled
I0211 15:28:10.500] pod/busybox1 labeled
I0211 15:28:10.500] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 15:28:10.500] has:Object 'Kind' is missing
I0211 15:28:10.581] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 15:28:10.660] pod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 15:28:10.743] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0211 15:28:10.745] Successful
I0211 15:28:10.746] message:pod/busybox0 patched
I0211 15:28:10.746] pod/busybox1 patched
I0211 15:28:10.746] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 15:28:10.746] has:Object 'Kind' is missing
I0211 15:28:10.828] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 15:28:10.993] generic-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 15:28:10.995] Successful
I0211 15:28:10.996] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0211 15:28:10.996] pod "busybox0" force deleted
I0211 15:28:10.996] pod "busybox1" force deleted
I0211 15:28:10.996] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 15:28:10.996] has:Object 'Kind' is missing
I0211 15:28:11.081] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 15:28:11.227] replicationcontroller/busybox0 created
I0211 15:28:11.230] replicationcontroller/busybox1 created
I0211 15:28:11.325] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 15:28:11.410] generic-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 15:28:11.492] generic-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I0211 15:28:11.576] generic-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I0211 15:28:11.744] generic-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0211 15:28:11.826] (Bgeneric-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0211 15:28:11.828] Successful
I0211 15:28:11.828] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0211 15:28:11.829] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0211 15:28:11.829] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 15:28:11.829] has:Object 'Kind' is missing
I0211 15:28:11.899] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0211 15:28:11.976] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0211 15:28:12.066] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 15:28:12.150] generic-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I0211 15:28:12.229] generic-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I0211 15:28:12.400] generic-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0211 15:28:12.483] (Bgeneric-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0211 15:28:12.486] Successful
I0211 15:28:12.486] message:service/busybox0 exposed
I0211 15:28:12.486] service/busybox1 exposed
I0211 15:28:12.486] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 15:28:12.486] has:Object 'Kind' is missing
I0211 15:28:12.573] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 15:28:12.653] generic-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I0211 15:28:12.736] generic-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I0211 15:28:12.912] generic-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I0211 15:28:12.991] generic-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I0211 15:28:12.993] Successful
I0211 15:28:12.993] message:replicationcontroller/busybox0 scaled
I0211 15:28:12.993] replicationcontroller/busybox1 scaled
I0211 15:28:12.994] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 15:28:12.994] has:Object 'Kind' is missing
I0211 15:28:13.077] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 15:28:13.236] generic-resources.sh:381: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 15:28:13.239] Successful
I0211 15:28:13.239] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0211 15:28:13.239] replicationcontroller "busybox0" force deleted
I0211 15:28:13.239] replicationcontroller "busybox1" force deleted
I0211 15:28:13.239] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 15:28:13.239] has:Object 'Kind' is missing
I0211 15:28:13.324] generic-resources.sh:386: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 15:28:13.480] deployment.apps/nginx1-deployment created
I0211 15:28:13.483] deployment.apps/nginx0-deployment created
I0211 15:28:13.581] generic-resources.sh:390: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0211 15:28:13.664] (Bgeneric-resources.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0211 15:28:13.842] generic-resources.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0211 15:28:13.844] Successful
I0211 15:28:13.845] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0211 15:28:13.845] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0211 15:28:13.845] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0211 15:28:13.845] has:Object 'Kind' is missing
I0211 15:28:13.923] deployment.apps/nginx1-deployment paused
I0211 15:28:13.926] deployment.apps/nginx0-deployment paused
I0211 15:28:14.018] generic-resources.sh:402: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0211 15:28:14.021] Successful
I0211 15:28:14.021] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
I0211 15:28:14.292] 1         <none>
I0211 15:28:14.292] 
I0211 15:28:14.292] deployment.apps/nginx0-deployment 
I0211 15:28:14.292] REVISION  CHANGE-CAUSE
I0211 15:28:14.292] 1         <none>
I0211 15:28:14.292] 
I0211 15:28:14.293] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0211 15:28:14.293] has:nginx0-deployment
I0211 15:28:14.293] Successful
I0211 15:28:14.293] message:deployment.apps/nginx1-deployment 
I0211 15:28:14.293] REVISION  CHANGE-CAUSE
I0211 15:28:14.293] 1         <none>
I0211 15:28:14.293] 
I0211 15:28:14.293] deployment.apps/nginx0-deployment 
I0211 15:28:14.293] REVISION  CHANGE-CAUSE
I0211 15:28:14.294] 1         <none>
I0211 15:28:14.294] 
I0211 15:28:14.294] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0211 15:28:14.294] has:nginx1-deployment
I0211 15:28:14.295] Successful
I0211 15:28:14.295] message:deployment.apps/nginx1-deployment 
I0211 15:28:14.296] REVISION  CHANGE-CAUSE
I0211 15:28:14.296] 1         <none>
I0211 15:28:14.296] 
I0211 15:28:14.296] deployment.apps/nginx0-deployment 
I0211 15:28:14.296] REVISION  CHANGE-CAUSE
I0211 15:28:14.296] 1         <none>
I0211 15:28:14.296] 
I0211 15:28:14.297] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0211 15:28:14.297] has:Object 'Kind' is missing
I0211 15:28:14.368] deployment.apps "nginx1-deployment" force deleted
I0211 15:28:14.372] deployment.apps "nginx0-deployment" force deleted
W0211 15:28:14.472] kubectl convert is DEPRECATED and will be removed in a future version.
W0211 15:28:14.473] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
W0211 15:28:14.473] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0211 15:28:14.473] I0211 15:28:11.230109   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549898887-1141", Name:"busybox0", UID:"a3ff4b48-2e11-11e9-b672-0242ac110002", APIVersion:"v1", ResourceVersion:"1059", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-2hm72
W0211 15:28:14.473] I0211 15:28:11.232415   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549898887-1141", Name:"busybox1", UID:"a3ffd936-2e11-11e9-b672-0242ac110002", APIVersion:"v1", ResourceVersion:"1061", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-ggsbp
W0211 15:28:14.473] I0211 15:28:11.243925   57381 namespace_controller.go:171] Namespace has been deleted non-native-resources
W0211 15:28:14.474] I0211 15:28:12.822036   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549898887-1141", Name:"busybox0", UID:"a3ff4b48-2e11-11e9-b672-0242ac110002", APIVersion:"v1", ResourceVersion:"1080", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-gnddf
W0211 15:28:14.474] I0211 15:28:12.828323   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549898887-1141", Name:"busybox1", UID:"a3ffd936-2e11-11e9-b672-0242ac110002", APIVersion:"v1", ResourceVersion:"1085", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-b685n
W0211 15:28:14.474] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0211 15:28:14.474] I0211 15:28:13.483710   57381 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549898887-1141", Name:"nginx1-deployment", UID:"a5570823-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1101", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-7c76c6cbb8 to 2
W0211 15:28:14.475] I0211 15:28:13.485718   57381 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549898887-1141", Name:"nginx0-deployment", UID:"a5578e57-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1102", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-7bb85585d7 to 2
W0211 15:28:14.475] I0211 15:28:13.485755   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898887-1141", Name:"nginx1-deployment-7c76c6cbb8", UID:"a5579320-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1103", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7c76c6cbb8-wv94f
W0211 15:28:14.475] I0211 15:28:13.489071   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898887-1141", Name:"nginx0-deployment-7bb85585d7", UID:"a557f46e-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1104", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-7bb85585d7-cdqff
W0211 15:28:14.475] I0211 15:28:13.489105   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898887-1141", Name:"nginx1-deployment-7c76c6cbb8", UID:"a5579320-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1103", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7c76c6cbb8-xs2x9
W0211 15:28:14.476] I0211 15:28:13.491259   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898887-1141", Name:"nginx0-deployment-7bb85585d7", UID:"a557f46e-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1104", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-7bb85585d7-8p6b8
W0211 15:28:14.476] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0211 15:28:14.476] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0211 15:28:15.463] generic-resources.sh:424: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 15:28:15.609] replicationcontroller/busybox0 created
I0211 15:28:15.613] replicationcontroller/busybox1 created
I0211 15:28:15.709] generic-resources.sh:428: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 15:28:15.794] Successful
I0211 15:28:15.794] message:no rollbacker has been implemented for "ReplicationController"
... skipping 4 lines ...
I0211 15:28:15.797] message:no rollbacker has been implemented for "ReplicationController"
I0211 15:28:15.797] no rollbacker has been implemented for "ReplicationController"
I0211 15:28:15.797] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 15:28:15.797] has:Object 'Kind' is missing
I0211 15:28:15.883] Successful
I0211 15:28:15.883] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 15:28:15.884] error: replicationcontrollers "busybox0" pausing is not supported
I0211 15:28:15.884] error: replicationcontrollers "busybox1" pausing is not supported
I0211 15:28:15.884] has:Object 'Kind' is missing
I0211 15:28:15.885] Successful
I0211 15:28:15.885] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 15:28:15.885] error: replicationcontrollers "busybox0" pausing is not supported
I0211 15:28:15.885] error: replicationcontrollers "busybox1" pausing is not supported
I0211 15:28:15.885] has:replicationcontrollers "busybox0" pausing is not supported
I0211 15:28:15.887] Successful
I0211 15:28:15.887] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 15:28:15.887] error: replicationcontrollers "busybox0" pausing is not supported
I0211 15:28:15.887] error: replicationcontrollers "busybox1" pausing is not supported
I0211 15:28:15.887] has:replicationcontrollers "busybox1" pausing is not supported
I0211 15:28:15.972] Successful
I0211 15:28:15.973] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 15:28:15.973] error: replicationcontrollers "busybox0" resuming is not supported
I0211 15:28:15.973] error: replicationcontrollers "busybox1" resuming is not supported
I0211 15:28:15.973] has:Object 'Kind' is missing
I0211 15:28:15.974] Successful
I0211 15:28:15.975] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 15:28:15.975] error: replicationcontrollers "busybox0" resuming is not supported
I0211 15:28:15.975] error: replicationcontrollers "busybox1" resuming is not supported
I0211 15:28:15.975] has:replicationcontrollers "busybox0" resuming is not supported
I0211 15:28:15.976] Successful
I0211 15:28:15.976] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 15:28:15.976] error: replicationcontrollers "busybox0" resuming is not supported
I0211 15:28:15.976] error: replicationcontrollers "busybox1" resuming is not supported
I0211 15:28:15.976] has:replicationcontrollers "busybox0" resuming is not supported
I0211 15:28:16.043] replicationcontroller "busybox0" force deleted
I0211 15:28:16.046] replicationcontroller "busybox1" force deleted
W0211 15:28:16.147] I0211 15:28:15.612401   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549898887-1141", Name:"busybox0", UID:"a69bfaca-2e11-11e9-b672-0242ac110002", APIVersion:"v1", ResourceVersion:"1150", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-7z2lz
W0211 15:28:16.147] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0211 15:28:16.147] I0211 15:28:15.614651   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549898887-1141", Name:"busybox1", UID:"a69c892a-2e11-11e9-b672-0242ac110002", APIVersion:"v1", ResourceVersion:"1152", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-fsmxw
W0211 15:28:16.148] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0211 15:28:16.148] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
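The recurring `unable to decode ... Object 'Kind' is missing` errors above are expected output: the recursive-test fixtures (`busybox-broken.yaml`, `nginx-broken.yaml`) deliberately misspell the `kind` field as `ind`, so the decoder cannot determine the object's type and the tests assert on that message. A rough Python sketch of the check (the helper name is made up; the real decoding happens in apimachinery, not in this form):

```python
import json

def decode_manifest(doc):
    """Reject objects that carry no 'kind' field, loosely mirroring the
    "Object 'Kind' is missing" errors seen throughout this log."""
    obj = json.loads(doc)
    if "kind" not in obj:
        raise ValueError("Object 'Kind' is missing in %r" % doc)
    return obj

# The broken fixture spells the field "ind" instead of "kind":
broken = '{"apiVersion":"v1","ind":"Pod","metadata":{"name":"busybox2"}}'
try:
    decode_manifest(broken)
except ValueError as err:
    print(err)
```

Because decoding the broken file fails while the valid siblings succeed, each recursive command in the log reports the two good objects as processed plus one decode error, which is exactly what the `has:` assertions verify.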
I0211 15:28:17.053] Recording: run_namespace_tests
I0211 15:28:17.053] Running command: run_namespace_tests
I0211 15:28:17.072] 
I0211 15:28:17.074] +++ Running case: test-cmd.run_namespace_tests 
I0211 15:28:17.076] +++ working dir: /go/src/k8s.io/kubernetes
I0211 15:28:17.078] +++ command: run_namespace_tests
I0211 15:28:17.087] +++ [0211 15:28:17] Testing kubectl(v1:namespaces)
I0211 15:28:17.156] namespace/my-namespace created
I0211 15:28:17.243] core.sh:1295: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0211 15:28:17.313] namespace "my-namespace" deleted
I0211 15:28:22.413] namespace/my-namespace condition met
I0211 15:28:22.498] Successful
I0211 15:28:22.498] message:Error from server (NotFound): namespaces "my-namespace" not found
I0211 15:28:22.499] has: not found
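The five-second gap between `namespace "my-namespace" deleted` and `namespace/my-namespace condition met` is the test waiting for the namespace to actually disappear before asserting that a subsequent get returns NotFound; waiting for a condition is also the subject of the PR under test ("Until, backed by retry watcher"). A generic polling sketch of such a wait (names hypothetical; the real code uses client-go's watch machinery rather than polling):

```python
import time

def wait_until(condition, timeout=30.0, interval=0.5):
    """Poll `condition` until it returns True or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# Illustrative only: pretend the namespace vanishes on the third check,
# standing in for a NotFound response from the API server.
state = {"checks": 0}
def namespace_gone():
    state["checks"] += 1
    return state["checks"] >= 3

assert wait_until(namespace_gone, timeout=5.0, interval=0.01)
```

A watch-based `Until` avoids the polling interval entirely by reacting to delete events as they arrive, which is why the retry-watcher approach is preferred in the actual implementation.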
I0211 15:28:22.599] core.sh:1310: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I0211 15:28:22.665] namespace/other created
I0211 15:28:22.756] core.sh:1314: Successful get namespaces/other {{.metadata.name}}: other
I0211 15:28:22.843] core.sh:1318: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 15:28:23.010] pod/valid-pod created
I0211 15:28:23.106] core.sh:1322: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 15:28:23.191] core.sh:1324: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 15:28:23.268] Successful
I0211 15:28:23.268] message:error: a resource cannot be retrieved by name across all namespaces
I0211 15:28:23.269] has:a resource cannot be retrieved by name across all namespaces
I0211 15:28:23.355] core.sh:1331: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 15:28:23.427] pod "valid-pod" force deleted
I0211 15:28:23.518] core.sh:1335: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 15:28:23.588] namespace "other" deleted
W0211 15:28:23.689] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0211 15:28:25.005] E0211 15:28:25.004394   57381 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0211 15:28:25.241] I0211 15:28:25.240558   57381 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0211 15:28:25.341] I0211 15:28:25.340953   57381 controller_utils.go:1028] Caches are synced for garbage collector controller
W0211 15:28:26.653] I0211 15:28:26.652339   57381 horizontal.go:320] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1549898887-1141
W0211 15:28:26.656] I0211 15:28:26.655633   57381 horizontal.go:320] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1549898887-1141
W0211 15:28:27.411] I0211 15:28:27.410543   57381 namespace_controller.go:171] Namespace has been deleted my-namespace
I0211 15:28:28.708] +++ exit code: 0
... skipping 111 lines ...
I0211 15:28:43.927] +++ command: run_client_config_tests
I0211 15:28:43.939] +++ [0211 15:28:43] Creating namespace namespace-1549898923-23525
I0211 15:28:44.006] namespace/namespace-1549898923-23525 created
I0211 15:28:44.074] Context "test" modified.
I0211 15:28:44.082] +++ [0211 15:28:44] Testing client config
I0211 15:28:44.153] Successful
I0211 15:28:44.153] message:error: stat missing: no such file or directory
I0211 15:28:44.153] has:missing: no such file or directory
I0211 15:28:44.219] Successful
I0211 15:28:44.219] message:error: stat missing: no such file or directory
I0211 15:28:44.220] has:missing: no such file or directory
I0211 15:28:44.286] Successful
I0211 15:28:44.286] message:error: stat missing: no such file or directory
I0211 15:28:44.286] has:missing: no such file or directory
I0211 15:28:44.351] Successful
I0211 15:28:44.351] message:Error in configuration: context was not found for specified context: missing-context
I0211 15:28:44.351] has:context was not found for specified context: missing-context
I0211 15:28:44.416] Successful
I0211 15:28:44.416] message:error: no server found for cluster "missing-cluster"
I0211 15:28:44.416] has:no server found for cluster "missing-cluster"
I0211 15:28:44.481] Successful
I0211 15:28:44.482] message:error: auth info "missing-user" does not exist
I0211 15:28:44.482] has:auth info "missing-user" does not exist
I0211 15:28:44.611] Successful
I0211 15:28:44.611] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0211 15:28:44.611] has:Error loading config file
I0211 15:28:44.677] Successful
I0211 15:28:44.677] message:error: stat missing-config: no such file or directory
I0211 15:28:44.677] has:no such file or directory
I0211 15:28:44.689] +++ exit code: 0
I0211 15:28:44.721] Recording: run_service_accounts_tests
I0211 15:28:44.722] Running command: run_service_accounts_tests
I0211 15:28:44.741] 
I0211 15:28:44.743] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 34 lines ...
I0211 15:28:51.356] Labels:                        run=pi
I0211 15:28:51.356] Annotations:                   <none>
I0211 15:28:51.356] Schedule:                      59 23 31 2 *
I0211 15:28:51.356] Concurrency Policy:            Allow
I0211 15:28:51.356] Suspend:                       False
I0211 15:28:51.356] Successful Job History Limit:  824641800008
I0211 15:28:51.356] Failed Job History Limit:      1
I0211 15:28:51.356] Starting Deadline Seconds:     <unset>
I0211 15:28:51.356] Selector:                      <unset>
I0211 15:28:51.357] Parallelism:                   <unset>
I0211 15:28:51.357] Completions:                   <unset>
I0211 15:28:51.357] Pod Template:
I0211 15:28:51.357]   Labels:  run=pi
... skipping 31 lines ...
I0211 15:28:51.843]                 job-name=test-job
I0211 15:28:51.843]                 run=pi
I0211 15:28:51.843] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0211 15:28:51.844] Parallelism:    1
I0211 15:28:51.844] Completions:    1
I0211 15:28:51.844] Start Time:     Mon, 11 Feb 2019 15:28:51 +0000
I0211 15:28:51.844] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0211 15:28:51.844] Pod Template:
I0211 15:28:51.844]   Labels:  controller-uid=bc0f3a3e-2e11-11e9-b672-0242ac110002
I0211 15:28:51.844]            job-name=test-job
I0211 15:28:51.844]            run=pi
I0211 15:28:51.844]   Containers:
I0211 15:28:51.844]    pi:
... skipping 329 lines ...
I0211 15:29:01.333]   selector:
I0211 15:29:01.334]     role: padawan
I0211 15:29:01.334]   sessionAffinity: None
I0211 15:29:01.334]   type: ClusterIP
I0211 15:29:01.334] status:
I0211 15:29:01.334]   loadBalancer: {}
W0211 15:29:01.434] error: you must specify resources by --filename when --local is set.
W0211 15:29:01.434] Example resource specifications include:
W0211 15:29:01.435]    '-f rsrc.yaml'
W0211 15:29:01.435]    '--filename=rsrc.json'
I0211 15:29:01.535] core.sh:886: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0211 15:29:01.653] core.sh:893: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0211 15:29:01.728] (Bservice "redis-master" deleted
... skipping 93 lines ...
I0211 15:29:07.458] apps.sh:80: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0211 15:29:07.545] apps.sh:81: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0211 15:29:07.644] daemonset.extensions/bind rolled back
I0211 15:29:07.740] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0211 15:29:07.828] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0211 15:29:07.930] Successful
I0211 15:29:07.931] message:error: unable to find specified revision 1000000 in history
I0211 15:29:07.931] has:unable to find specified revision
I0211 15:29:08.018] apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0211 15:29:08.107] apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0211 15:29:08.203] daemonset.extensions/bind rolled back
I0211 15:29:08.295] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0211 15:29:08.383] apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 22 lines ...
I0211 15:29:09.765] Namespace:    namespace-1549898948-29295
I0211 15:29:09.765] Selector:     app=guestbook,tier=frontend
I0211 15:29:09.765] Labels:       app=guestbook
I0211 15:29:09.765]               tier=frontend
I0211 15:29:09.765] Annotations:  <none>
I0211 15:29:09.765] Replicas:     3 current / 3 desired
I0211 15:29:09.765] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 15:29:09.765] Pod Template:
I0211 15:29:09.766]   Labels:  app=guestbook
I0211 15:29:09.766]            tier=frontend
I0211 15:29:09.766]   Containers:
I0211 15:29:09.766]    php-redis:
I0211 15:29:09.766]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0211 15:29:09.873] Namespace:    namespace-1549898948-29295
I0211 15:29:09.873] Selector:     app=guestbook,tier=frontend
I0211 15:29:09.873] Labels:       app=guestbook
I0211 15:29:09.873]               tier=frontend
I0211 15:29:09.873] Annotations:  <none>
I0211 15:29:09.873] Replicas:     3 current / 3 desired
I0211 15:29:09.874] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 15:29:09.874] Pod Template:
I0211 15:29:09.874]   Labels:  app=guestbook
I0211 15:29:09.874]            tier=frontend
I0211 15:29:09.874]   Containers:
I0211 15:29:09.874]    php-redis:
I0211 15:29:09.874]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0211 15:29:09.977] Namespace:    namespace-1549898948-29295
I0211 15:29:09.978] Selector:     app=guestbook,tier=frontend
I0211 15:29:09.978] Labels:       app=guestbook
I0211 15:29:09.978]               tier=frontend
I0211 15:29:09.978] Annotations:  <none>
I0211 15:29:09.978] Replicas:     3 current / 3 desired
I0211 15:29:09.978] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 15:29:09.978] Pod Template:
I0211 15:29:09.978]   Labels:  app=guestbook
I0211 15:29:09.978]            tier=frontend
I0211 15:29:09.978]   Containers:
I0211 15:29:09.978]    php-redis:
I0211 15:29:09.978]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 19 lines ...
I0211 15:29:10.182] Namespace:    namespace-1549898948-29295
I0211 15:29:10.182] Selector:     app=guestbook,tier=frontend
I0211 15:29:10.182] Labels:       app=guestbook
I0211 15:29:10.182]               tier=frontend
I0211 15:29:10.182] Annotations:  <none>
I0211 15:29:10.182] Replicas:     3 current / 3 desired
I0211 15:29:10.182] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 15:29:10.183] Pod Template:
I0211 15:29:10.183]   Labels:  app=guestbook
I0211 15:29:10.183]            tier=frontend
I0211 15:29:10.183]   Containers:
I0211 15:29:10.183]    php-redis:
I0211 15:29:10.183]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0211 15:29:10.221] Namespace:    namespace-1549898948-29295
I0211 15:29:10.221] Selector:     app=guestbook,tier=frontend
I0211 15:29:10.221] Labels:       app=guestbook
I0211 15:29:10.221]               tier=frontend
I0211 15:29:10.221] Annotations:  <none>
I0211 15:29:10.222] Replicas:     3 current / 3 desired
I0211 15:29:10.222] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 15:29:10.222] Pod Template:
I0211 15:29:10.222]   Labels:  app=guestbook
I0211 15:29:10.222]            tier=frontend
I0211 15:29:10.222]   Containers:
I0211 15:29:10.222]    php-redis:
I0211 15:29:10.222]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0211 15:29:10.323] Namespace:    namespace-1549898948-29295
I0211 15:29:10.323] Selector:     app=guestbook,tier=frontend
I0211 15:29:10.323] Labels:       app=guestbook
I0211 15:29:10.323]               tier=frontend
I0211 15:29:10.323] Annotations:  <none>
I0211 15:29:10.323] Replicas:     3 current / 3 desired
I0211 15:29:10.323] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 15:29:10.323] Pod Template:
I0211 15:29:10.323]   Labels:  app=guestbook
I0211 15:29:10.323]            tier=frontend
I0211 15:29:10.323]   Containers:
I0211 15:29:10.323]    php-redis:
I0211 15:29:10.324]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0211 15:29:10.420] Namespace:    namespace-1549898948-29295
I0211 15:29:10.420] Selector:     app=guestbook,tier=frontend
I0211 15:29:10.420] Labels:       app=guestbook
I0211 15:29:10.420]               tier=frontend
I0211 15:29:10.420] Annotations:  <none>
I0211 15:29:10.420] Replicas:     3 current / 3 desired
I0211 15:29:10.421] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 15:29:10.421] Pod Template:
I0211 15:29:10.421]   Labels:  app=guestbook
I0211 15:29:10.421]            tier=frontend
I0211 15:29:10.421]   Containers:
I0211 15:29:10.421]    php-redis:
I0211 15:29:10.421]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0211 15:29:10.522] Namespace:    namespace-1549898948-29295
I0211 15:29:10.522] Selector:     app=guestbook,tier=frontend
I0211 15:29:10.522] Labels:       app=guestbook
I0211 15:29:10.522]               tier=frontend
I0211 15:29:10.522] Annotations:  <none>
I0211 15:29:10.522] Replicas:     3 current / 3 desired
I0211 15:29:10.522] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 15:29:10.522] Pod Template:
I0211 15:29:10.522]   Labels:  app=guestbook
I0211 15:29:10.522]            tier=frontend
I0211 15:29:10.522]   Containers:
I0211 15:29:10.523]    php-redis:
I0211 15:29:10.523]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 22 lines ...
I0211 15:29:11.296] core.sh:1061: Successful get rc frontend {{.spec.replicas}}: 3
I0211 15:29:11.384] core.sh:1065: Successful get rc frontend {{.spec.replicas}}: 3
I0211 15:29:11.465] replicationcontroller/frontend scaled
I0211 15:29:11.551] core.sh:1069: Successful get rc frontend {{.spec.replicas}}: 2
I0211 15:29:11.626] replicationcontroller "frontend" deleted
W0211 15:29:11.727] I0211 15:29:10.697717   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549898948-29295", Name:"frontend", UID:"c6bedd87-2e11-11e9-b672-0242ac110002", APIVersion:"v1", ResourceVersion:"1403", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-z4qr2
W0211 15:29:11.727] error: Expected replicas to be 3, was 2
W0211 15:29:11.727] I0211 15:29:11.211094   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549898948-29295", Name:"frontend", UID:"c6bedd87-2e11-11e9-b672-0242ac110002", APIVersion:"v1", ResourceVersion:"1410", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-qjszv
W0211 15:29:11.727] I0211 15:29:11.469900   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549898948-29295", Name:"frontend", UID:"c6bedd87-2e11-11e9-b672-0242ac110002", APIVersion:"v1", ResourceVersion:"1416", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-qjszv
W0211 15:29:11.789] I0211 15:29:11.788805   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549898948-29295", Name:"redis-master", UID:"c817c478-2e11-11e9-b672-0242ac110002", APIVersion:"v1", ResourceVersion:"1427", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-bdtq5
I0211 15:29:11.889] replicationcontroller/redis-master created
I0211 15:29:11.949] replicationcontroller/redis-slave created
I0211 15:29:12.048] replicationcontroller/redis-master scaled
... skipping 29 lines ...
I0211 15:29:13.482] service "expose-test-deployment" deleted
I0211 15:29:13.575] Successful
I0211 15:29:13.575] message:service/expose-test-deployment exposed
I0211 15:29:13.575] has:service/expose-test-deployment exposed
I0211 15:29:13.650] service "expose-test-deployment" deleted
I0211 15:29:13.735] Successful
I0211 15:29:13.736] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0211 15:29:13.736] See 'kubectl expose -h' for help and examples
I0211 15:29:13.736] has:invalid deployment: no selectors
I0211 15:29:13.820] Successful
I0211 15:29:13.821] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0211 15:29:13.821] See 'kubectl expose -h' for help and examples
I0211 15:29:13.821] has:invalid deployment: no selectors
W0211 15:29:13.921] I0211 15:29:12.884340   57381 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549898948-29295", Name:"nginx-deployment", UID:"c8bed8aa-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1481", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-64bb598779 to 3
W0211 15:29:13.922] I0211 15:29:12.887494   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898948-29295", Name:"nginx-deployment-64bb598779", UID:"c8bf70f8-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1482", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-64bb598779-kqgk9
W0211 15:29:13.922] I0211 15:29:12.889966   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898948-29295", Name:"nginx-deployment-64bb598779", UID:"c8bf70f8-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1482", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-64bb598779-z6xdw
W0211 15:29:13.922] I0211 15:29:12.890206   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898948-29295", Name:"nginx-deployment-64bb598779", UID:"c8bf70f8-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1482", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-64bb598779-xk65d
... skipping 30 lines ...
I0211 15:29:15.844] service "frontend-4" deleted
I0211 15:29:15.851] service "frontend-5" deleted
W0211 15:29:15.951] I0211 15:29:14.497703   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549898948-29295", Name:"frontend", UID:"c9b52615-2e11-11e9-b672-0242ac110002", APIVersion:"v1", ResourceVersion:"1560", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-6mrn5
W0211 15:29:15.952] I0211 15:29:14.500544   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549898948-29295", Name:"frontend", UID:"c9b52615-2e11-11e9-b672-0242ac110002", APIVersion:"v1", ResourceVersion:"1560", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-smvg9
W0211 15:29:15.952] I0211 15:29:14.500726   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549898948-29295", Name:"frontend", UID:"c9b52615-2e11-11e9-b672-0242ac110002", APIVersion:"v1", ResourceVersion:"1560", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7lk6d
I0211 15:29:16.052] Successful
I0211 15:29:16.053] message:error: cannot expose a Node
I0211 15:29:16.053] has:cannot expose
I0211 15:29:16.053] Successful
I0211 15:29:16.053] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0211 15:29:16.053] has:metadata.name: Invalid value
I0211 15:29:16.137] Successful
I0211 15:29:16.137] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 30 lines ...
I0211 15:29:18.230] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0211 15:29:18.321] core.sh:1237: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0211 15:29:18.395] horizontalpodautoscaler.autoscaling "frontend" deleted
W0211 15:29:18.496] I0211 15:29:17.805003   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549898948-29295", Name:"frontend", UID:"cbadb36c-2e11-11e9-b672-0242ac110002", APIVersion:"v1", ResourceVersion:"1651", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-b4ljt
W0211 15:29:18.496] I0211 15:29:17.807552   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549898948-29295", Name:"frontend", UID:"cbadb36c-2e11-11e9-b672-0242ac110002", APIVersion:"v1", ResourceVersion:"1651", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-fvffm
W0211 15:29:18.496] I0211 15:29:17.808056   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549898948-29295", Name:"frontend", UID:"cbadb36c-2e11-11e9-b672-0242ac110002", APIVersion:"v1", ResourceVersion:"1651", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-rh4md
W0211 15:29:18.496] Error: required flag(s) "max" not set
W0211 15:29:18.496] 
W0211 15:29:18.497] 
W0211 15:29:18.497] Examples:
W0211 15:29:18.497]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0211 15:29:18.497]   kubectl autoscale deployment foo --min=2 --max=10
W0211 15:29:18.497]   
... skipping 54 lines ...
I0211 15:29:18.705]           limits:
I0211 15:29:18.705]             cpu: 300m
I0211 15:29:18.705]           requests:
I0211 15:29:18.705]             cpu: 300m
I0211 15:29:18.705]       terminationGracePeriodSeconds: 0
I0211 15:29:18.705] status: {}
W0211 15:29:18.806] Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
I0211 15:29:18.960] deployment.apps/nginx-deployment-resources created
I0211 15:29:19.062] core.sh:1252: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
I0211 15:29:19.154] core.sh:1253: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0211 15:29:19.242] core.sh:1254: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0211 15:29:19.321] deployment.extensions/nginx-deployment-resources resource requirements updated
I0211 15:29:19.413] core.sh:1257: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
... skipping 85 lines ...
W0211 15:29:20.402] I0211 15:29:18.964360   57381 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549898948-29295", Name:"nginx-deployment-resources", UID:"cc5e8e80-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1671", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-695c766d58 to 3
W0211 15:29:20.402] I0211 15:29:18.967642   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898948-29295", Name:"nginx-deployment-resources-695c766d58", UID:"cc5f1485-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1672", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-695c766d58-fdmjf
W0211 15:29:20.403] I0211 15:29:18.970283   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898948-29295", Name:"nginx-deployment-resources-695c766d58", UID:"cc5f1485-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1672", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-695c766d58-7k262
W0211 15:29:20.403] I0211 15:29:18.970691   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898948-29295", Name:"nginx-deployment-resources-695c766d58", UID:"cc5f1485-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1672", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-695c766d58-kbppx
W0211 15:29:20.403] I0211 15:29:19.323910   57381 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549898948-29295", Name:"nginx-deployment-resources", UID:"cc5e8e80-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1685", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-5b7fc6dd8b to 1
W0211 15:29:20.404] I0211 15:29:19.326713   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898948-29295", Name:"nginx-deployment-resources-5b7fc6dd8b", UID:"cc960b24-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1686", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5b7fc6dd8b-kfx2p
W0211 15:29:20.404] error: unable to find container named redis
W0211 15:29:20.404] I0211 15:29:19.680547   57381 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549898948-29295", Name:"nginx-deployment-resources", UID:"cc5e8e80-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1696", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-695c766d58 to 2
W0211 15:29:20.404] I0211 15:29:19.685036   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898948-29295", Name:"nginx-deployment-resources-695c766d58", UID:"cc5f1485-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1700", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-695c766d58-fdmjf
W0211 15:29:20.404] I0211 15:29:19.686512   57381 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549898948-29295", Name:"nginx-deployment-resources", UID:"cc5e8e80-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1699", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6bc4567bf6 to 1
W0211 15:29:20.405] I0211 15:29:19.688148   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898948-29295", Name:"nginx-deployment-resources-6bc4567bf6", UID:"cccbad4d-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1704", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6bc4567bf6-65dbx
W0211 15:29:20.405] I0211 15:29:19.947232   57381 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549898948-29295", Name:"nginx-deployment-resources", UID:"cc5e8e80-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1716", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-695c766d58 to 1
W0211 15:29:20.405] I0211 15:29:19.951533   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898948-29295", Name:"nginx-deployment-resources-695c766d58", UID:"cc5f1485-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1720", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-695c766d58-7k262
W0211 15:29:20.405] I0211 15:29:19.951687   57381 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549898948-29295", Name:"nginx-deployment-resources", UID:"cc5e8e80-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1719", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-bc7ccd667 to 1
W0211 15:29:20.406] I0211 15:29:19.954047   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898948-29295", Name:"nginx-deployment-resources-bc7ccd667", UID:"ccf44919-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1724", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-bc7ccd667-j7jjj
W0211 15:29:20.406] error: you must specify resources by --filename when --local is set.
W0211 15:29:20.406] Example resource specifications include:
W0211 15:29:20.406]    '-f rsrc.yaml'
W0211 15:29:20.406]    '--filename=rsrc.json'
I0211 15:29:20.507] core.sh:1273: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0211 15:29:20.545] core.sh:1274: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I0211 15:29:20.633] core.sh:1275: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 44 lines ...
I0211 15:29:22.098]                 pod-template-hash=7875bf5c8b
I0211 15:29:22.098] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I0211 15:29:22.099]                 deployment.kubernetes.io/max-replicas: 2
I0211 15:29:22.099]                 deployment.kubernetes.io/revision: 1
I0211 15:29:22.099] Controlled By:  Deployment/test-nginx-apps
I0211 15:29:22.099] Replicas:       1 current / 1 desired
I0211 15:29:22.099] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0211 15:29:22.099] Pod Template:
I0211 15:29:22.099]   Labels:  app=test-nginx-apps
I0211 15:29:22.099]            pod-template-hash=7875bf5c8b
I0211 15:29:22.099]   Containers:
I0211 15:29:22.099]    nginx:
I0211 15:29:22.099]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 91 lines ...
W0211 15:29:26.081] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
W0211 15:29:26.082] I0211 15:29:25.586258   57381 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549898960-9785", Name:"nginx", UID:"d002b716-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1889", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-6458c7c55b to 1
W0211 15:29:26.082] I0211 15:29:25.589133   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898960-9785", Name:"nginx-6458c7c55b", UID:"d0519a8f-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1890", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6458c7c55b-c4sf8
I0211 15:29:27.071] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0211 15:29:27.256] apps.sh:303: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0211 15:29:27.354] deployment.extensions/nginx rolled back
W0211 15:29:27.455] error: unable to find specified revision 1000000 in history
I0211 15:29:28.447] apps.sh:307: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0211 15:29:28.542] deployment.extensions/nginx paused
W0211 15:29:28.645] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
I0211 15:29:28.746] deployment.extensions/nginx resumed
I0211 15:29:28.845] deployment.extensions/nginx rolled back
I0211 15:29:29.024]     deployment.kubernetes.io/revision-history: 1,3
W0211 15:29:29.209] error: desired revision (3) is different from the running revision (5)
I0211 15:29:29.369] deployment.apps/nginx2 created
I0211 15:29:29.455] deployment.extensions "nginx2" deleted
I0211 15:29:29.547] deployment.extensions "nginx" deleted
I0211 15:29:29.650] apps.sh:329: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 15:29:29.804] deployment.apps/nginx-deployment created
I0211 15:29:29.905] apps.sh:332: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
... skipping 25 lines ...
W0211 15:29:32.177] I0211 15:29:29.807781   57381 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549898960-9785", Name:"nginx-deployment", UID:"d2d514d5-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1952", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-79b6f6d8f5 to 3
W0211 15:29:32.177] I0211 15:29:29.810356   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898960-9785", Name:"nginx-deployment-79b6f6d8f5", UID:"d2d5bfa8-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1953", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-79b6f6d8f5-khvq8
W0211 15:29:32.178] I0211 15:29:29.812215   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898960-9785", Name:"nginx-deployment-79b6f6d8f5", UID:"d2d5bfa8-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1953", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-79b6f6d8f5-5hn8d
W0211 15:29:32.178] I0211 15:29:29.812693   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898960-9785", Name:"nginx-deployment-79b6f6d8f5", UID:"d2d5bfa8-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1953", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-79b6f6d8f5-2qgwx
W0211 15:29:32.178] I0211 15:29:30.168122   57381 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549898960-9785", Name:"nginx-deployment", UID:"d2d514d5-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1966", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5bfd55c857 to 1
W0211 15:29:32.178] I0211 15:29:30.170935   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898960-9785", Name:"nginx-deployment-5bfd55c857", UID:"d30cc19e-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1967", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5bfd55c857-tqkjf
W0211 15:29:32.179] error: unable to find container named "redis"
W0211 15:29:32.179] I0211 15:29:31.295425   57381 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549898960-9785", Name:"nginx-deployment", UID:"d2d514d5-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1985", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-79b6f6d8f5 to 2
W0211 15:29:32.179] I0211 15:29:31.299126   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898960-9785", Name:"nginx-deployment-79b6f6d8f5", UID:"d2d5bfa8-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1989", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-79b6f6d8f5-khvq8
W0211 15:29:32.179] I0211 15:29:31.301305   57381 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549898960-9785", Name:"nginx-deployment", UID:"d2d514d5-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1988", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6c69c955c7 to 1
W0211 15:29:32.180] I0211 15:29:31.304201   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898960-9785", Name:"nginx-deployment-6c69c955c7", UID:"d3b7fb92-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1993", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6c69c955c7-4b8hn
W0211 15:29:32.180] I0211 15:29:32.078116   57381 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549898960-9785", Name:"nginx-deployment", UID:"d42fa3a6-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2018", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-79b6f6d8f5 to 3
W0211 15:29:32.180] I0211 15:29:32.080713   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898960-9785", Name:"nginx-deployment-79b6f6d8f5", UID:"d4302912-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2019", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-79b6f6d8f5-lgxm6
... skipping 45 lines ...
W0211 15:29:34.529] I0211 15:29:33.239349   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898960-9785", Name:"nginx-deployment-7cd48fcfbc", UID:"d4df1e0e-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2072", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-7cd48fcfbc-jnfmj
W0211 15:29:34.530] I0211 15:29:33.325232   57381 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549898960-9785", Name:"nginx-deployment", UID:"d42fa3a6-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2084", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-79b6f6d8f5 to 1
W0211 15:29:34.530] I0211 15:29:33.329495   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898960-9785", Name:"nginx-deployment-79b6f6d8f5", UID:"d4302912-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2088", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-79b6f6d8f5-hsxnj
W0211 15:29:34.530] I0211 15:29:33.331851   57381 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549898960-9785", Name:"nginx-deployment", UID:"d42fa3a6-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2086", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-76c5fccf8b to 1
W0211 15:29:34.530] I0211 15:29:33.335201   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898960-9785", Name:"nginx-deployment-76c5fccf8b", UID:"d4ed9034-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2094", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-76c5fccf8b-c2h4s
W0211 15:29:34.531] I0211 15:29:33.476809   57381 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549898960-9785", Name:"nginx-deployment", UID:"d42fa3a6-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2105", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-79b6f6d8f5 to 0
W0211 15:29:34.531] E0211 15:29:33.546214   57381 replica_set.go:450] Sync "namespace-1549898960-9785/nginx-deployment-79b6f6d8f5" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-79b6f6d8f5": the object has been modified; please apply your changes to the latest version and try again
W0211 15:29:34.531] I0211 15:29:33.599232   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898960-9785", Name:"nginx-deployment-79b6f6d8f5", UID:"d4302912-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2109", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-79b6f6d8f5-c6dd7
W0211 15:29:34.531] I0211 15:29:33.627400   57381 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549898960-9785", Name:"nginx-deployment", UID:"d42fa3a6-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2111", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-687fbc687d to 1
W0211 15:29:34.532] I0211 15:29:33.747273   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898960-9785", Name:"nginx-deployment-687fbc687d", UID:"d51c6f5f-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2116", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-687fbc687d-t86rv
W0211 15:29:34.532] E0211 15:29:33.945776   57381 replica_set.go:450] Sync "namespace-1549898960-9785/nginx-deployment-79b6f6d8f5" failed with replicasets.apps "nginx-deployment-79b6f6d8f5" not found
W0211 15:29:34.532] E0211 15:29:33.995972   57381 replica_set.go:450] Sync "namespace-1549898960-9785/nginx-deployment-687fbc687d" failed with replicasets.apps "nginx-deployment-687fbc687d" not found
W0211 15:29:34.532] I0211 15:29:34.341234   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898973-14003", Name:"frontend", UID:"d588a71f-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2142", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-ghkg8
W0211 15:29:34.533] I0211 15:29:34.343910   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898973-14003", Name:"frontend", UID:"d588a71f-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2142", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-g82zx
W0211 15:29:34.533] I0211 15:29:34.345602   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898973-14003", Name:"frontend", UID:"d588a71f-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2142", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-tc4c2
W0211 15:29:34.533] E0211 15:29:34.495459   57381 replica_set.go:450] Sync "namespace-1549898973-14003/frontend" failed with replicasets.apps "frontend" not found
I0211 15:29:34.633] apps.sh:508: Successful get pods -l "tier=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 15:29:34.634] (Bapps.sh:512: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 15:29:34.782] (Breplicaset.apps/frontend-no-cascade created
I0211 15:29:34.877] apps.sh:518: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
I0211 15:29:34.880] (B+++ [0211 15:29:34] Deleting rs
I0211 15:29:34.957] replicaset.extensions "frontend-no-cascade" deleted
... skipping 11 lines ...
I0211 15:29:35.795] Namespace:    namespace-1549898973-14003
I0211 15:29:35.795] Selector:     app=guestbook,tier=frontend
I0211 15:29:35.795] Labels:       app=guestbook
I0211 15:29:35.795]               tier=frontend
I0211 15:29:35.795] Annotations:  <none>
I0211 15:29:35.795] Replicas:     3 current / 3 desired
I0211 15:29:35.795] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 15:29:35.795] Pod Template:
I0211 15:29:35.795]   Labels:  app=guestbook
I0211 15:29:35.796]            tier=frontend
I0211 15:29:35.796]   Containers:
I0211 15:29:35.796]    php-redis:
I0211 15:29:35.796]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0211 15:29:35.901] Namespace:    namespace-1549898973-14003
I0211 15:29:35.901] Selector:     app=guestbook,tier=frontend
I0211 15:29:35.902] Labels:       app=guestbook
I0211 15:29:35.902]               tier=frontend
I0211 15:29:35.902] Annotations:  <none>
I0211 15:29:35.902] Replicas:     3 current / 3 desired
I0211 15:29:35.902] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 15:29:35.902] Pod Template:
I0211 15:29:35.902]   Labels:  app=guestbook
I0211 15:29:35.902]            tier=frontend
I0211 15:29:35.902]   Containers:
I0211 15:29:35.902]    php-redis:
I0211 15:29:35.902]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0211 15:29:36.003] Namespace:    namespace-1549898973-14003
I0211 15:29:36.003] Selector:     app=guestbook,tier=frontend
I0211 15:29:36.003] Labels:       app=guestbook
I0211 15:29:36.003]               tier=frontend
I0211 15:29:36.004] Annotations:  <none>
I0211 15:29:36.004] Replicas:     3 current / 3 desired
I0211 15:29:36.004] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 15:29:36.004] Pod Template:
I0211 15:29:36.004]   Labels:  app=guestbook
I0211 15:29:36.004]            tier=frontend
I0211 15:29:36.004]   Containers:
I0211 15:29:36.004]    php-redis:
I0211 15:29:36.004]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 7 lines ...
I0211 15:29:36.005]     Mounts:            <none>
I0211 15:29:36.005]   Volumes:             <none>
I0211 15:29:36.005] (B
W0211 15:29:36.105] I0211 15:29:34.784614   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898973-14003", Name:"frontend-no-cascade", UID:"d5cca913-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2158", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-8jjjh
W0211 15:29:36.106] I0211 15:29:34.787037   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898973-14003", Name:"frontend-no-cascade", UID:"d5cca913-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2158", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-j2rh4
W0211 15:29:36.106] I0211 15:29:34.787513   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898973-14003", Name:"frontend-no-cascade", UID:"d5cca913-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2158", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-lc9xc
W0211 15:29:36.106] E0211 15:29:35.044796   57381 replica_set.go:450] Sync "namespace-1549898973-14003/frontend-no-cascade" failed with replicasets.apps "frontend-no-cascade" not found
W0211 15:29:36.107] I0211 15:29:35.563781   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898973-14003", Name:"frontend", UID:"d643888e-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2180", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-nwbjk
W0211 15:29:36.107] I0211 15:29:35.566596   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898973-14003", Name:"frontend", UID:"d643888e-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2180", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-wqtzw
W0211 15:29:36.107] I0211 15:29:35.566632   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898973-14003", Name:"frontend", UID:"d643888e-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2180", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-gmtc4
I0211 15:29:36.207] apps.sh:543: Successful describe
I0211 15:29:36.208] Name:         frontend
I0211 15:29:36.208] Namespace:    namespace-1549898973-14003
I0211 15:29:36.208] Selector:     app=guestbook,tier=frontend
I0211 15:29:36.208] Labels:       app=guestbook
I0211 15:29:36.208]               tier=frontend
I0211 15:29:36.208] Annotations:  <none>
I0211 15:29:36.208] Replicas:     3 current / 3 desired
I0211 15:29:36.208] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 15:29:36.208] Pod Template:
I0211 15:29:36.208]   Labels:  app=guestbook
I0211 15:29:36.208]            tier=frontend
I0211 15:29:36.209]   Containers:
I0211 15:29:36.209]    php-redis:
I0211 15:29:36.209]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0211 15:29:36.239] Namespace:    namespace-1549898973-14003
I0211 15:29:36.239] Selector:     app=guestbook,tier=frontend
I0211 15:29:36.239] Labels:       app=guestbook
I0211 15:29:36.239]               tier=frontend
I0211 15:29:36.239] Annotations:  <none>
I0211 15:29:36.239] Replicas:     3 current / 3 desired
I0211 15:29:36.239] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 15:29:36.239] Pod Template:
I0211 15:29:36.239]   Labels:  app=guestbook
I0211 15:29:36.239]            tier=frontend
I0211 15:29:36.239]   Containers:
I0211 15:29:36.239]    php-redis:
I0211 15:29:36.240]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0211 15:29:36.341] Namespace:    namespace-1549898973-14003
I0211 15:29:36.341] Selector:     app=guestbook,tier=frontend
I0211 15:29:36.341] Labels:       app=guestbook
I0211 15:29:36.341]               tier=frontend
I0211 15:29:36.341] Annotations:  <none>
I0211 15:29:36.341] Replicas:     3 current / 3 desired
I0211 15:29:36.342] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 15:29:36.342] Pod Template:
I0211 15:29:36.342]   Labels:  app=guestbook
I0211 15:29:36.342]            tier=frontend
I0211 15:29:36.342]   Containers:
I0211 15:29:36.342]    php-redis:
I0211 15:29:36.342]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0211 15:29:36.440] Namespace:    namespace-1549898973-14003
I0211 15:29:36.440] Selector:     app=guestbook,tier=frontend
I0211 15:29:36.440] Labels:       app=guestbook
I0211 15:29:36.440]               tier=frontend
I0211 15:29:36.440] Annotations:  <none>
I0211 15:29:36.441] Replicas:     3 current / 3 desired
I0211 15:29:36.441] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 15:29:36.441] Pod Template:
I0211 15:29:36.441]   Labels:  app=guestbook
I0211 15:29:36.441]            tier=frontend
I0211 15:29:36.441]   Containers:
I0211 15:29:36.441]    php-redis:
I0211 15:29:36.441]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I0211 15:29:36.542] Namespace:    namespace-1549898973-14003
I0211 15:29:36.542] Selector:     app=guestbook,tier=frontend
I0211 15:29:36.542] Labels:       app=guestbook
I0211 15:29:36.542]               tier=frontend
I0211 15:29:36.543] Annotations:  <none>
I0211 15:29:36.543] Replicas:     3 current / 3 desired
I0211 15:29:36.543] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 15:29:36.543] Pod Template:
I0211 15:29:36.543]   Labels:  app=guestbook
I0211 15:29:36.543]            tier=frontend
I0211 15:29:36.543]   Containers:
I0211 15:29:36.543]    php-redis:
I0211 15:29:36.544]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 184 lines ...
I0211 15:29:41.672] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0211 15:29:41.765] apps.sh:647: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0211 15:29:41.842] (Bhorizontalpodautoscaler.autoscaling "frontend" deleted
W0211 15:29:41.942] I0211 15:29:41.234953   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898973-14003", Name:"frontend", UID:"d9a4e242-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2371", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-w6gv7
W0211 15:29:41.943] I0211 15:29:41.237162   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898973-14003", Name:"frontend", UID:"d9a4e242-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2371", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-bb8r8
W0211 15:29:41.943] I0211 15:29:41.237389   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549898973-14003", Name:"frontend", UID:"d9a4e242-2e11-11e9-b672-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2371", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-nz5bj
W0211 15:29:41.943] Error: required flag(s) "max" not set
W0211 15:29:41.943] 
W0211 15:29:41.944] 
W0211 15:29:41.944] Examples:
W0211 15:29:41.944]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0211 15:29:41.944]   kubectl autoscale deployment foo --min=2 --max=10
W0211 15:29:41.944]   
... skipping 85 lines ...
I0211 15:29:44.849] (Bapps.sh:431: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0211 15:29:44.939] (Bapps.sh:432: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0211 15:29:45.040] (Bstatefulset.apps/nginx rolled back
I0211 15:29:45.136] apps.sh:435: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0211 15:29:45.225] (Bapps.sh:436: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0211 15:29:45.328] (BSuccessful
I0211 15:29:45.329] message:error: unable to find specified revision 1000000 in history
I0211 15:29:45.329] has:unable to find specified revision
I0211 15:29:45.419] apps.sh:440: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0211 15:29:45.507] (Bapps.sh:441: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0211 15:29:45.604] (Bstatefulset.apps/nginx rolled back
I0211 15:29:45.699] apps.sh:444: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
I0211 15:29:45.788] (Bapps.sh:445: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
... skipping 61 lines ...
I0211 15:29:47.534] Name:         mock
I0211 15:29:47.534] Namespace:    namespace-1549898986-24400
I0211 15:29:47.534] Selector:     app=mock
I0211 15:29:47.534] Labels:       app=mock
I0211 15:29:47.534] Annotations:  <none>
I0211 15:29:47.534] Replicas:     1 current / 1 desired
I0211 15:29:47.535] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0211 15:29:47.535] Pod Template:
I0211 15:29:47.535]   Labels:  app=mock
I0211 15:29:47.535]   Containers:
I0211 15:29:47.535]    mock-container:
I0211 15:29:47.535]     Image:        k8s.gcr.io/pause:2.0
I0211 15:29:47.535]     Port:         9949/TCP
... skipping 56 lines ...
I0211 15:29:49.690] Name:         mock
I0211 15:29:49.690] Namespace:    namespace-1549898986-24400
I0211 15:29:49.690] Selector:     app=mock
I0211 15:29:49.690] Labels:       app=mock
I0211 15:29:49.690] Annotations:  <none>
I0211 15:29:49.690] Replicas:     1 current / 1 desired
I0211 15:29:49.690] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0211 15:29:49.690] Pod Template:
I0211 15:29:49.691]   Labels:  app=mock
I0211 15:29:49.691]   Containers:
I0211 15:29:49.691]    mock-container:
I0211 15:29:49.691]     Image:        k8s.gcr.io/pause:2.0
I0211 15:29:49.691]     Port:         9949/TCP
... skipping 56 lines ...
I0211 15:29:51.829] Name:         mock
I0211 15:29:51.829] Namespace:    namespace-1549898986-24400
I0211 15:29:51.829] Selector:     app=mock
I0211 15:29:51.829] Labels:       app=mock
I0211 15:29:51.829] Annotations:  <none>
I0211 15:29:51.830] Replicas:     1 current / 1 desired
I0211 15:29:51.830] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0211 15:29:51.830] Pod Template:
I0211 15:29:51.830]   Labels:  app=mock
I0211 15:29:51.830]   Containers:
I0211 15:29:51.830]    mock-container:
I0211 15:29:51.830]     Image:        k8s.gcr.io/pause:2.0
I0211 15:29:51.830]     Port:         9949/TCP
... skipping 42 lines ...
I0211 15:29:53.801] Namespace:    namespace-1549898986-24400
I0211 15:29:53.801] Selector:     app=mock
I0211 15:29:53.801] Labels:       app=mock
I0211 15:29:53.801]               status=replaced
I0211 15:29:53.801] Annotations:  <none>
I0211 15:29:53.802] Replicas:     1 current / 1 desired
I0211 15:29:53.802] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0211 15:29:53.802] Pod Template:
I0211 15:29:53.802]   Labels:  app=mock
I0211 15:29:53.802]   Containers:
I0211 15:29:53.802]    mock-container:
I0211 15:29:53.802]     Image:        k8s.gcr.io/pause:2.0
I0211 15:29:53.802]     Port:         9949/TCP
... skipping 11 lines ...
I0211 15:29:53.803] Namespace:    namespace-1549898986-24400
I0211 15:29:53.803] Selector:     app=mock2
I0211 15:29:53.803] Labels:       app=mock2
I0211 15:29:53.803]               status=replaced
I0211 15:29:53.803] Annotations:  <none>
I0211 15:29:53.803] Replicas:     1 current / 1 desired
I0211 15:29:53.803] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0211 15:29:53.804] Pod Template:
I0211 15:29:53.804]   Labels:  app=mock2
I0211 15:29:53.804]   Containers:
I0211 15:29:53.804]    mock-container:
I0211 15:29:53.804]     Image:        k8s.gcr.io/pause:2.0
I0211 15:29:53.804]     Port:         9949/TCP
... skipping 107 lines ...
I0211 15:29:58.463] storage.sh:30: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 15:29:58.626] (Bpersistentvolume/pv0001 created
I0211 15:29:58.724] storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
I0211 15:29:58.799] (Bpersistentvolume "pv0001" deleted
W0211 15:29:58.901] I0211 15:29:56.406276   57381 horizontal.go:320] Horizontal Pod Autoscaler frontend has been deleted in namespace-1549898973-14003
W0211 15:29:58.901] I0211 15:29:57.561711   57381 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549898986-24400", Name:"mock", UID:"e3606c61-2e11-11e9-b672-0242ac110002", APIVersion:"v1", ResourceVersion:"2639", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: mock-nnxc7
W0211 15:29:58.970] E0211 15:29:58.969594   57381 pv_protection_controller.go:116] PV pv0002 failed with : Operation cannot be fulfilled on persistentvolumes "pv0002": the object has been modified; please apply your changes to the latest version and try again
I0211 15:29:59.070] persistentvolume/pv0002 created
I0211 15:29:59.071] storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
I0211 15:29:59.141] (Bpersistentvolume "pv0002" deleted
I0211 15:29:59.300] persistentvolume/pv0003 created
I0211 15:29:59.402] storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
I0211 15:29:59.472] (Bpersistentvolume "pv0003" deleted
... skipping 470 lines ...
I0211 15:30:03.991] yes
I0211 15:30:03.991] has:the server doesn't have a resource type
I0211 15:30:04.063] Successful
I0211 15:30:04.064] message:yes
I0211 15:30:04.064] has:yes
I0211 15:30:04.133] Successful
I0211 15:30:04.134] message:error: --subresource can not be used with NonResourceURL
I0211 15:30:04.134] has:subresource can not be used with NonResourceURL
I0211 15:30:04.210] Successful
I0211 15:30:04.291] Successful
I0211 15:30:04.291] message:yes
I0211 15:30:04.291] 0
I0211 15:30:04.291] has:0
... skipping 6 lines ...
I0211 15:30:04.470] role.rbac.authorization.k8s.io/testing-R reconciled
I0211 15:30:04.560] legacy-script.sh:745: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
I0211 15:30:04.646] (Blegacy-script.sh:746: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
I0211 15:30:04.732] (Blegacy-script.sh:747: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
I0211 15:30:04.824] (Blegacy-script.sh:748: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
I0211 15:30:04.901] (BSuccessful
I0211 15:30:04.902] message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
I0211 15:30:04.902] has:only rbac.authorization.k8s.io/v1 is supported
I0211 15:30:04.984] rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
I0211 15:30:04.988] role.rbac.authorization.k8s.io "testing-R" deleted
I0211 15:30:04.995] clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
I0211 15:30:05.001] clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
I0211 15:30:05.011] Recording: run_retrieve_multiple_tests
... skipping 1017 lines ...
I0211 15:30:32.577] message:node/127.0.0.1 already uncordoned (dry run)
I0211 15:30:32.577] has:already uncordoned
I0211 15:30:32.667] node-management.sh:119: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
I0211 15:30:32.741] (Bnode/127.0.0.1 labeled
I0211 15:30:32.831] node-management.sh:124: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
I0211 15:30:32.894] (BSuccessful
I0211 15:30:32.894] message:error: cannot specify both a node name and a --selector option
I0211 15:30:32.894] See 'kubectl drain -h' for help and examples
I0211 15:30:32.894] has:cannot specify both a node name
I0211 15:30:32.962] Successful
I0211 15:30:32.962] message:error: USAGE: cordon NODE [flags]
I0211 15:30:32.963] See 'kubectl cordon -h' for help and examples
I0211 15:30:32.963] has:error\: USAGE\: cordon NODE
I0211 15:30:33.034] node/127.0.0.1 already uncordoned
I0211 15:30:33.106] Successful
I0211 15:30:33.106] message:error: You must provide one or more resources by argument or filename.
I0211 15:30:33.106] Example resource specifications include:
I0211 15:30:33.106]    '-f rsrc.yaml'
I0211 15:30:33.106]    '--filename=rsrc.json'
I0211 15:30:33.106]    '<resource> <name>'
I0211 15:30:33.106]    '<resource>'
I0211 15:30:33.107] has:must provide one or more resources
... skipping 15 lines ...
I0211 15:30:33.521] Successful
I0211 15:30:33.521] message:The following compatible plugins are available:
I0211 15:30:33.521] 
I0211 15:30:33.521] test/fixtures/pkg/kubectl/plugins/version/kubectl-version
I0211 15:30:33.521]   - warning: kubectl-version overwrites existing command: "kubectl version"
I0211 15:30:33.521] 
I0211 15:30:33.521] error: one plugin warning was found
I0211 15:30:33.522] has:kubectl-version overwrites existing command: "kubectl version"
I0211 15:30:33.594] Successful
I0211 15:30:33.594] message:The following compatible plugins are available:
I0211 15:30:33.594] 
I0211 15:30:33.594] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0211 15:30:33.594] test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
I0211 15:30:33.595]   - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0211 15:30:33.595] 
I0211 15:30:33.595] error: one plugin warning was found
I0211 15:30:33.595] has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
I0211 15:30:33.666] Successful
I0211 15:30:33.666] message:The following compatible plugins are available:
I0211 15:30:33.666] 
I0211 15:30:33.666] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0211 15:30:33.667] has:plugins are available
I0211 15:30:33.740] Successful
I0211 15:30:33.740] message:
I0211 15:30:33.740] error: unable to find any kubectl plugins in your PATH
I0211 15:30:33.740] has:unable to find any kubectl plugins in your PATH
I0211 15:30:33.806] Successful
I0211 15:30:33.806] message:I am plugin foo
I0211 15:30:33.806] has:plugin foo
I0211 15:30:33.875] Successful
I0211 15:30:33.875] message:Client Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.2.523+ab8071f58364f6", GitCommit:"ab8071f58364f671567ac5dd9350a78d57a86a7a", GitTreeState:"clean", BuildDate:"2019-02-11T15:23:55Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
... skipping 9 lines ...
I0211 15:30:33.944] 
I0211 15:30:33.945] +++ Running case: test-cmd.run_impersonation_tests 
I0211 15:30:33.947] +++ working dir: /go/src/k8s.io/kubernetes
I0211 15:30:33.950] +++ command: run_impersonation_tests
I0211 15:30:33.958] +++ [0211 15:30:33] Testing impersonation
I0211 15:30:34.028] Successful
I0211 15:30:34.028] message:error: requesting groups or user-extra for  without impersonating a user
I0211 15:30:34.028] has:without impersonating a user
I0211 15:30:34.202] certificatesigningrequest.certificates.k8s.io/foo created
I0211 15:30:34.296] authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
I0211 15:30:34.380] authorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
I0211 15:30:34.458] certificatesigningrequest.certificates.k8s.io "foo" deleted
I0211 15:30:34.627] certificatesigningrequest.certificates.k8s.io/foo created
... skipping 40 lines ...
W0211 15:30:37.670] I0211 15:30:37.669058   54036 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 15:30:37.670] I0211 15:30:37.669043   54036 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 15:30:37.670] I0211 15:30:37.665993   54036 crd_finalizer.go:254] Shutting down CRDFinalizer
W0211 15:30:37.671] I0211 15:30:37.669169   54036 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 15:30:37.671] I0211 15:30:37.669176   54036 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 15:30:37.671] I0211 15:30:37.666000   54036 establishing_controller.go:84] Shutting down EstablishingController
W0211 15:30:37.672] W0211 15:30:37.669182   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:37.672] I0211 15:30:37.666010   54036 autoregister_controller.go:160] Shutting down autoregister controller
W0211 15:30:37.672] I0211 15:30:37.666016   54036 crdregistration_controller.go:143] Shutting down crd-autoregister controller
W0211 15:30:37.673] I0211 15:30:37.666022   54036 apiservice_controller.go:102] Shutting down APIServiceRegistrationController
W0211 15:30:37.673] I0211 15:30:37.666030   54036 available_controller.go:328] Shutting down AvailableConditionController
... skipping 29 lines ...
W0211 15:30:37.677] W0211 15:30:37.667121   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
... skipping 49 lines ...
W0211 15:30:37.684] I0211 15:30:37.665985   54036 naming_controller.go:295] Shutting down NamingConditionController
... skipping 128 lines ...
W0211 15:30:37.732] + make test-integration
W0211 15:30:37.799] I0211 15:30:37.799038   54036 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0211 15:30:37.800] W0211 15:30:37.799136   54036 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
... skipping 8 lines ...
I0211 15:30:37.902] No resources found
I0211 15:30:37.902] No resources found
I0211 15:30:37.902] +++ [0211 15:30:37] TESTS PASSED
I0211 15:30:37.902] junit report dir: /workspace/artifacts
I0211 15:30:37.902] +++ [0211 15:30:37] Clean up complete
W0211 15:30:38.667] W0211 15:30:38.666552   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
... skipping 10 lines ...
W0211 15:30:38.669] W0211 15:30:38.667232   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.669] W0211 15:30:38.667265   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.670] W0211 15:30:38.667371   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.670] W0211 15:30:38.667423   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.670] W0211 15:30:38.667439   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.670] W0211 15:30:38.667476   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.670] W0211 15:30:38.667929   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.671] W0211 15:30:38.667950   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.671] W0211 15:30:38.668165   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.671] W0211 15:30:38.667979   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.671] W0211 15:30:38.668021   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.671] W0211 15:30:38.668062   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.672] W0211 15:30:38.668067   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.672] W0211 15:30:38.668270   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.672] W0211 15:30:38.668278   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.672] W0211 15:30:38.668313   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.673] W0211 15:30:38.668591   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.673] W0211 15:30:38.668631   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.673] W0211 15:30:38.668717   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.673] W0211 15:30:38.668803   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.673] W0211 15:30:38.669086   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.674] W0211 15:30:38.669118   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.674] W0211 15:30:38.668990   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.674] W0211 15:30:38.669014   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.674] W0211 15:30:38.669050   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.674] W0211 15:30:38.669046   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.675] W0211 15:30:38.669077   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.675] W0211 15:30:38.669224   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.675] W0211 15:30:38.669515   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.675] W0211 15:30:38.669529   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.675] W0211 15:30:38.669552   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.676] W0211 15:30:38.669565   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.676] W0211 15:30:38.669576   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.676] W0211 15:30:38.669607   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.676] W0211 15:30:38.669607   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.676] W0211 15:30:38.669604   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.677] W0211 15:30:38.669520   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.677] W0211 15:30:38.669754   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.677] W0211 15:30:38.669782   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.677] W0211 15:30:38.669965   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.677] W0211 15:30:38.670003   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.678] W0211 15:30:38.670291   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.678] W0211 15:30:38.670044   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.678] W0211 15:30:38.670054   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.678] W0211 15:30:38.670348   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.678] W0211 15:30:38.670131   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.679] W0211 15:30:38.670370   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.679] W0211 15:30:38.670192   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.679] W0211 15:30:38.670225   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.679] W0211 15:30:38.670448   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.679] W0211 15:30:38.670509   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.680] W0211 15:30:38.670509   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.680] W0211 15:30:38.670762   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.680] W0211 15:30:38.670839   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.680] W0211 15:30:38.670893   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.680] W0211 15:30:38.670956   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.681] W0211 15:30:38.671118   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.800] W0211 15:30:38.799569   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:38.801] W0211 15:30:38.801144   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:39.965] W0211 15:30:39.964276   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:39.977] W0211 15:30:39.976945   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:39.980] W0211 15:30:39.979918   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:39.981] W0211 15:30:39.980468   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:39.994] W0211 15:30:39.993617   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.003] W0211 15:30:40.003086   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.007] W0211 15:30:40.006941   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.007] W0211 15:30:40.007434   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.017] W0211 15:30:40.017306   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.025] W0211 15:30:40.024755   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.034] W0211 15:30:40.033770   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.043] W0211 15:30:40.042405   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.052] W0211 15:30:40.052216   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.070] W0211 15:30:40.069394   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.085] W0211 15:30:40.085189   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.096] W0211 15:30:40.095697   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.103] W0211 15:30:40.102907   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.105] W0211 15:30:40.105248   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.110] W0211 15:30:40.109654   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.113] W0211 15:30:40.113177   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.115] W0211 15:30:40.114761   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.120] W0211 15:30:40.119889   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.131] W0211 15:30:40.131003   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.138] W0211 15:30:40.137608   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.141] W0211 15:30:40.141071   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.150] W0211 15:30:40.149954   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.157] W0211 15:30:40.157318   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.162] W0211 15:30:40.162246   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.167] W0211 15:30:40.166859   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.175] W0211 15:30:40.174518   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.195] W0211 15:30:40.194668   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.212] W0211 15:30:40.211776   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.215] W0211 15:30:40.215078   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.216] W0211 15:30:40.216229   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.232] W0211 15:30:40.231824   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.232] W0211 15:30:40.231828   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.233] W0211 15:30:40.232442   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.240] W0211 15:30:40.239589   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.252] W0211 15:30:40.251373   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 15:30:40.253] W0211 15:30:40.253172   54036 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
... skipping 30 identical "connection refused" reconnect lines ...
I0211 15:30:42.008] +++ [0211 15:30:42] Checking etcd is on PATH
I0211 15:30:42.009] /workspace/kubernetes/third_party/etcd/etcd
I0211 15:30:42.012] +++ [0211 15:30:42] Starting etcd instance
I0211 15:30:42.063] etcd --advertise-client-urls http://127.0.0.1:2379 --data-dir /tmp/tmp.FrokuzoomQ --listen-client-urls http://127.0.0.1:2379 --debug > "/workspace/artifacts/etcd.153e4276385c.root.log.DEBUG.20190211-153042.96953" 2>/dev/null
I0211 15:30:42.063] Waiting for etcd to come up.
W0211 15:30:42.163] I0211 15:30:42.153441   54036 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 37 lines ...
I0211 15:30:42.892] +++ [0211 15:30:42] On try 2, etcd: : http://127.0.0.1:2379
I0211 15:30:42.893] {"action":"set","node":{"key":"/_test","value":"","modifiedIndex":4,"createdIndex":4}}
I0211 15:30:42.893] +++ [0211 15:30:42] Running integration test cases
I0211 15:30:47.228] Running tests for APIVersion: v1,admissionregistration.k8s.io/v1beta1,admission.k8s.io/v1beta1,apps/v1,apps/v1beta1,apps/v1beta2,auditregistration.k8s.io/v1alpha1,authentication.k8s.io/v1,authentication.k8s.io/v1beta1,authorization.k8s.io/v1,authorization.k8s.io/v1beta1,autoscaling/v1,autoscaling/v2beta1,autoscaling/v2beta2,batch/v1,batch/v1beta1,batch/v2alpha1,certificates.k8s.io/v1beta1,coordination.k8s.io/v1beta1,coordination.k8s.io/v1,extensions/v1beta1,events.k8s.io/v1beta1,imagepolicy.k8s.io/v1alpha1,networking.k8s.io/v1,policy/v1beta1,rbac.authorization.k8s.io/v1,rbac.authorization.k8s.io/v1beta1,rbac.authorization.k8s.io/v1alpha1,scheduling.k8s.io/v1alpha1,scheduling.k8s.io/v1beta1,settings.k8s.io/v1alpha1,storage.k8s.io/v1beta1,storage.k8s.io/v1,storage.k8s.io/v1alpha1,
I0211 15:30:47.265] +++ [0211 15:30:47] Running tests without code coverage
W0211 15:31:08.695] # k8s.io/kubernetes/test/integration/apimachinery [k8s.io/kubernetes/test/integration/apimachinery.test]
W0211 15:31:08.696] test/integration/apimachinery/watch_restart_test.go:179:4: cannot use func literal (type func(*kubernetes.Clientset, *"k8s.io/kubernetes/vendor/k8s.io/api/core/v1".Secret) ("k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch".Interface, error)) as type func(*kubernetes.Clientset, *"k8s.io/kubernetes/vendor/k8s.io/api/core/v1".Secret) ("k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch".Interface, error, func()) in field value
I0211 15:42:15.429] FAIL	k8s.io/kubernetes/test/integration/apimachinery [build failed]
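The single real failure in this run is the compile error above: the table-driven test's struct field expects a watcher factory returning three values, including a cleanup `func()`, while the func literal at `watch_restart_test.go:179` still returns only two. A minimal Go sketch of the mismatch, using hypothetical stand-in types (the real ones are `watch.Interface`, `*kubernetes.Clientset`, and `*v1.Secret`):

```go
package main

import "fmt"

// Hypothetical stand-ins for the real types referenced in
// test/integration/apimachinery/watch_restart_test.go; the names
// below are illustrative only.
type watchInterface interface{}
type clientset struct{}
type secret struct{}

// Per the compile error in this log, the field type now requires a
// third return value: a cleanup func().
type testCase struct {
	getWatcher func(c *clientset, s *secret) (watchInterface, error, func())
}

func newTestCase() testCase {
	return testCase{
		// Returning the extra (possibly no-op) cleanup matches the field
		// type; a two-value literal reproduces the error
		// "cannot use func literal ... in field value".
		getWatcher: func(c *clientset, s *secret) (watchInterface, error, func()) {
			return nil, nil, func() {}
		},
	}
}

func main() {
	_, err, cleanup := newTestCase().getWatcher(nil, nil)
	cleanup()
	fmt.Println("compiles:", err == nil)
}
```

On the PR side, the fix is to update every func literal assigned to that field to return the cleanup function as well (or a no-op), since Go function types are only assignable when their full signatures match.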
I0211 15:42:15.430] ok  	k8s.io/kubernetes/test/integration/apiserver	47.641s
I0211 15:42:15.430] ok  	k8s.io/kubernetes/test/integration/apiserver/apply	24.668s
I0211 15:42:15.430] ok  	k8s.io/kubernetes/test/integration/auth	103.063s
I0211 15:42:15.430] ok  	k8s.io/kubernetes/test/integration/client	69.684s
I0211 15:42:15.430] ok  	k8s.io/kubernetes/test/integration/configmap	7.039s
I0211 15:42:15.430] ok  	k8s.io/kubernetes/test/integration/cronjob	57.945s
... skipping 27 lines ...
I0211 15:42:15.433] ok  	k8s.io/kubernetes/test/integration/storageclasses	5.072s
I0211 15:42:15.433] ok  	k8s.io/kubernetes/test/integration/tls	9.300s
I0211 15:42:15.433] ok  	k8s.io/kubernetes/test/integration/ttlcontroller	10.904s
I0211 15:42:15.434] ok  	k8s.io/kubernetes/test/integration/volume	92.157s
I0211 15:42:15.434] ok  	k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration	143.644s
I0211 15:42:27.614] +++ [0211 15:42:27] Saved JUnit XML test report to /workspace/artifacts/junit_642613dbe8fbf016c1770a7007e34bb12666c617_20190211-153047.xml
I0211 15:42:27.617] Makefile:184: recipe for target 'test' failed
I0211 15:42:27.628] +++ [0211 15:42:27] Cleaning up etcd
W0211 15:42:27.729] make[1]: *** [test] Error 1
W0211 15:42:27.729] !!! [0211 15:42:27] Call tree:
W0211 15:42:27.729] !!! [0211 15:42:27]  1: hack/make-rules/test-integration.sh:99 runTests(...)
I0211 15:42:27.971] +++ [0211 15:42:27] Integration test cleanup complete
I0211 15:42:27.972] Makefile:203: recipe for target 'test-integration' failed
W0211 15:42:28.073] make: *** [test-integration] Error 1
W0211 15:42:29.779] Traceback (most recent call last):
W0211 15:42:29.780]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 178, in <module>
W0211 15:42:29.780]     ARGS.exclude_typecheck, ARGS.exclude_godep)
W0211 15:42:29.780]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 140, in main
W0211 15:42:29.780]     check(*cmd)
W0211 15:42:29.780]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W0211 15:42:29.781]     subprocess.check_call(cmd)
W0211 15:42:29.781]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0211 15:42:29.796]     raise CalledProcessError(retcode, cmd)
W0211 15:42:29.797] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=n', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'EXCLUDE_TYPECHECK=n', '-e', 'EXCLUDE_GODEP=n', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.13-v20190125-cc5d6ecff3', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
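The traceback above is the test harness surfacing the build failure: the `check()` helper in `scenarios/kubernetes_verify.py` wraps `subprocess.check_call`, which raises `CalledProcessError` whenever the dockerized test command exits non-zero (here, status 2). A minimal sketch of that failure mode (simplified; the real helper also logs the command before running it):

```python
import subprocess


def check(*cmd):
    """Run a command, raising CalledProcessError on a non-zero exit."""
    subprocess.check_call(cmd)


try:
    check("false")  # 'false' always exits with status 1
except subprocess.CalledProcessError as e:
    print("returned non-zero exit status", e.returncode)
```

Because the exception propagates out of `main`, the scenario process itself exits non-zero, which is why the job is then marked `FAIL: pull-kubernetes-integration` below.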
E0211 15:42:29.804] Command failed
I0211 15:42:29.804] process 671 exited with code 1 after 26.1m
E0211 15:42:29.805] FAIL: pull-kubernetes-integration
I0211 15:42:29.805] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0211 15:42:30.360] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0211 15:42:30.403] process 127210 exited with code 0 after 0.0m
I0211 15:42:30.403] Call:  gcloud config get-value account
I0211 15:42:30.695] process 127222 exited with code 0 after 0.0m
I0211 15:42:30.696] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0211 15:42:30.696] Upload result and artifacts...
I0211 15:42:30.696] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/67350/pull-kubernetes-integration/44324
I0211 15:42:30.697] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/67350/pull-kubernetes-integration/44324/artifacts
W0211 15:42:31.868] CommandException: One or more URLs matched no objects.
E0211 15:42:32.012] Command failed
I0211 15:42:32.012] process 127234 exited with code 1 after 0.0m
W0211 15:42:32.012] Remote dir gs://kubernetes-jenkins/pr-logs/pull/67350/pull-kubernetes-integration/44324/artifacts not exist yet
I0211 15:42:32.013] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/67350/pull-kubernetes-integration/44324/artifacts
I0211 15:42:36.469] process 127376 exited with code 0 after 0.1m
W0211 15:42:36.470] metadata path /workspace/_artifacts/metadata.json does not exist
W0211 15:42:36.470] metadata not found or invalid, init with empty metadata
... skipping 23 lines ...