PR: caesarxuchao: Adding a limit on the size of request body the apiserver will decode for write operations
Result: FAILURE
Tests: 1 failed / 119 succeeded
Started: 2019-02-11 17:51
Elapsed: 15m17s
Revision:
Builder: gke-prow-containerd-pool-99179761-9sg5
Refs: master:986399b8, 73805:ee787c80
pod: 9253dda6-2e25-11e9-8de8-0a580a6c0524
infra-commit: bd83b1c78
repo: k8s.io/kubernetes
repo-commit: 3f9b5b36eadaec09e35db4085293ef88c9606ee9
repos: {u'k8s.io/kubernetes': u'master:986399b8909f6dccd1c84d40af5fc7fa546c193f,73805:ee787c80c50f61015d33cfcc3228297384c1c187'}

Test Failures


test-cmd run_crd_tests 33s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=test\-cmd\srun\_crd\_tests$'
/go/src/k8s.io/kubernetes/hack/lib/test.sh: line 264: 72959 Killed                  while [ ${tries} -lt 10 ]; do
    tries=$((tries+1)); kubectl "${kube_flags[@]}" patch bars/test -p "{\"patched\":\"${tries}\"}" --type=merge; sleep 1;
done
/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh: line 295: 72958 Killed                  kubectl "${kube_flags[@]}" get bars --request-timeout=1m --watch-only -o name
!!! [0211 18:03:17] Call tree:
!!! [0211 18:03:17]  1: /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh:443 kube::test::get_object_assert(...)
!!! [0211 18:03:17]  2: /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh:133 run_non_native_resource_tests(...)
!!! [0211 18:03:17]  3: /go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 run_crd_tests(...)
!!! [0211 18:03:17]  4: /go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0211 18:03:17]  5: /go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:134 juLog(...)
!!! [0211 18:03:17]  6: /go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:517 record_command(...)
!!! [0211 18:03:17]  7: hack/make-rules/test-cmd.sh:109 runTests(...)
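
The two "Killed" jobs above are background helpers from the CRD test, a watch on the bars resource and a patch-retry loop; being killed is likely just the harness reaping them, while the failure itself is the get_object_assert call at crd.sh:443 in the call tree. For readability, here is a commented sketch of the killed loop, using only what the quoted fragment from hack/lib/test.sh shows (kube_flags and the bars/test custom resource come from the surrounding test; the tries=0 initialization is assumed, since the snippet starts mid-script):

    tries=0
    while [ ${tries} -lt 10 ]; do
        tries=$((tries+1))
        # merge-patch the custom resource bars/test, recording the attempt number
        kubectl "${kube_flags[@]}" patch bars/test -p "{\"patched\":\"${tries}\"}" --type=merge
        sleep 1    # pace the retries one second apart
    done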
				
Full stdout/stderr is in junit_test-cmd.xml.




Error lines from build-log.txt

... skipping 320 lines ...
W0211 17:59:55.639] I0211 17:59:55.638988   54033 serving.go:311] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
W0211 17:59:55.640] I0211 17:59:55.639109   54033 server.go:561] external host was not specified, using 172.17.0.2
W0211 17:59:55.640] W0211 17:59:55.639122   54033 authentication.go:415] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
W0211 17:59:55.640] I0211 17:59:55.639455   54033 server.go:146] Version: v1.14.0-alpha.2.528+3f9b5b36eadaec
W0211 17:59:56.337] I0211 17:59:56.336910   54033 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0211 17:59:56.338] I0211 17:59:56.336939   54033 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0211 17:59:56.338] E0211 17:59:56.337449   54033 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 17:59:56.338] E0211 17:59:56.337522   54033 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 17:59:56.338] E0211 17:59:56.337564   54033 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 17:59:56.339] E0211 17:59:56.337692   54033 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 17:59:56.339] E0211 17:59:56.337766   54033 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 17:59:56.339] E0211 17:59:56.337856   54033 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 17:59:56.340] I0211 17:59:56.337910   54033 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0211 17:59:56.340] I0211 17:59:56.337922   54033 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0211 17:59:56.340] I0211 17:59:56.339569   54033 clientconn.go:551] parsed scheme: ""
W0211 17:59:56.340] I0211 17:59:56.339596   54033 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0211 17:59:56.340] I0211 17:59:56.339649   54033 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0211 17:59:56.341] I0211 17:59:56.339784   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 335 lines ...
W0211 17:59:56.727] W0211 17:59:56.727198   54033 genericapiserver.go:334] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0211 17:59:57.336] I0211 17:59:57.335938   54033 clientconn.go:551] parsed scheme: ""
W0211 17:59:57.337] I0211 17:59:57.335980   54033 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0211 17:59:57.337] I0211 17:59:57.336032   54033 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0211 17:59:57.337] I0211 17:59:57.336085   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 17:59:57.337] I0211 17:59:57.336734   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 17:59:57.618] E0211 17:59:57.617477   54033 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 17:59:57.618] E0211 17:59:57.617540   54033 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 17:59:57.619] E0211 17:59:57.617620   54033 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 17:59:57.619] E0211 17:59:57.617654   54033 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 17:59:57.619] E0211 17:59:57.617670   54033 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 17:59:57.619] E0211 17:59:57.617698   54033 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 17:59:57.620] I0211 17:59:57.617718   54033 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0211 17:59:57.620] I0211 17:59:57.617730   54033 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0211 17:59:57.620] I0211 17:59:57.619058   54033 clientconn.go:551] parsed scheme: ""
W0211 17:59:57.620] I0211 17:59:57.619083   54033 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0211 17:59:57.620] I0211 17:59:57.619133   54033 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0211 17:59:57.621] I0211 17:59:57.619172   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 169 lines ...
W0211 18:00:37.392] I0211 18:00:37.389376   57378 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for serviceaccounts
W0211 18:00:37.392] I0211 18:00:37.389411   57378 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.extensions
W0211 18:00:37.393] I0211 18:00:37.389483   57378 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for statefulsets.apps
W0211 18:00:37.393] I0211 18:00:37.389540   57378 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
W0211 18:00:37.393] I0211 18:00:37.389588   57378 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
W0211 18:00:37.393] I0211 18:00:37.389620   57378 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
W0211 18:00:37.393] E0211 18:00:37.389700   57378 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0211 18:00:37.394] I0211 18:00:37.389732   57378 controllermanager.go:493] Started "resourcequota"
W0211 18:00:37.394] I0211 18:00:37.389782   57378 resource_quota_controller.go:276] Starting resource quota controller
W0211 18:00:37.394] I0211 18:00:37.389895   57378 controller_utils.go:1021] Waiting for caches to sync for resource quota controller
W0211 18:00:37.394] I0211 18:00:37.389924   57378 resource_quota_monitor.go:301] QuotaMonitor running
I0211 18:00:37.495] +++ [0211 18:00:37] On try 3, controller-manager: ok
W0211 18:00:37.595] I0211 18:00:37.496480   57378 controllermanager.go:493] Started "garbagecollector"
W0211 18:00:37.596] I0211 18:00:37.496519   57378 garbagecollector.go:130] Starting garbage collector controller
W0211 18:00:37.596] I0211 18:00:37.496563   57378 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0211 18:00:37.596] I0211 18:00:37.496628   57378 graph_builder.go:308] GraphBuilder running
W0211 18:00:37.596] E0211 18:00:37.497200   57378 prometheus.go:138] failed to register depth metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_depth", help: "(Deprecated) Current depth of workqueue: disruption-recheck", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_depth" is not a valid metric name
W0211 18:00:37.597] E0211 18:00:37.497272   57378 prometheus.go:150] failed to register adds metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_adds", help: "(Deprecated) Total number of adds handled by workqueue: disruption-recheck", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_adds" is not a valid metric name
W0211 18:00:37.597] E0211 18:00:37.497328   57378 prometheus.go:162] failed to register latency metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_queue_latency", help: "(Deprecated) How long an item stays in workqueuedisruption-recheck before being requested.", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_queue_latency" is not a valid metric name
W0211 18:00:37.598] E0211 18:00:37.497418   57378 prometheus.go:174] failed to register work_duration metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_work_duration", help: "(Deprecated) How long processing an item from workqueuedisruption-recheck takes.", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_work_duration" is not a valid metric name
W0211 18:00:37.598] E0211 18:00:37.497457   57378 prometheus.go:189] failed to register unfinished_work_seconds metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_unfinished_work_seconds", help: "(Deprecated) How many seconds of work disruption-recheck has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_unfinished_work_seconds" is not a valid metric name
W0211 18:00:37.599] E0211 18:00:37.497514   57378 prometheus.go:202] failed to register longest_running_processor_microseconds metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for disruption-recheck been running.", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_longest_running_processor_microseconds" is not a valid metric name
W0211 18:00:37.599] E0211 18:00:37.497564   57378 prometheus.go:214] failed to register retries metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_retries", help: "(Deprecated) Total number of retries handled by workqueue: disruption-recheck", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_retries" is not a valid metric name
W0211 18:00:37.599] I0211 18:00:37.497690   57378 controllermanager.go:493] Started "disruption"
W0211 18:00:37.599] I0211 18:00:37.497719   57378 disruption.go:286] Starting disruption controller
W0211 18:00:37.599] I0211 18:00:37.497730   57378 controller_utils.go:1021] Waiting for caches to sync for disruption controller
W0211 18:00:37.600] I0211 18:00:37.498385   57378 controllermanager.go:493] Started "endpoint"
W0211 18:00:37.600] I0211 18:00:37.498417   57378 endpoints_controller.go:155] Starting endpoint controller
W0211 18:00:37.600] I0211 18:00:37.498431   57378 controller_utils.go:1021] Waiting for caches to sync for endpoint controller
... skipping 59 lines ...
W0211 18:00:37.610] I0211 18:00:37.523609   57378 serviceaccounts_controller.go:115] Starting service account controller
W0211 18:00:37.610] I0211 18:00:37.524083   57378 controller_utils.go:1021] Waiting for caches to sync for service account controller
W0211 18:00:37.610] I0211 18:00:37.525227   57378 controllermanager.go:493] Started "job"
W0211 18:00:37.611] W0211 18:00:37.525274   57378 controllermanager.go:485] Skipping "csrsigning"
W0211 18:00:37.611] I0211 18:00:37.525278   57378 job_controller.go:143] Starting job controller
W0211 18:00:37.611] I0211 18:00:37.525292   57378 controller_utils.go:1021] Waiting for caches to sync for job controller
W0211 18:00:37.611] E0211 18:00:37.525934   57378 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0211 18:00:37.611] W0211 18:00:37.525961   57378 controllermanager.go:485] Skipping "service"
W0211 18:00:37.611] I0211 18:00:37.527038   57378 controllermanager.go:493] Started "replicationcontroller"
W0211 18:00:37.612] I0211 18:00:37.527262   57378 replica_set.go:182] Starting replicationcontroller controller
W0211 18:00:37.612] I0211 18:00:37.527288   57378 controller_utils.go:1021] Waiting for caches to sync for ReplicationController controller
W0211 18:00:37.612] I0211 18:00:37.528337   57378 controllermanager.go:493] Started "csrapproving"
W0211 18:00:37.612] I0211 18:00:37.528600   57378 node_lifecycle_controller.go:77] Sending events to api server
W0211 18:00:37.612] I0211 18:00:37.528604   57378 certificate_controller.go:113] Starting certificate controller
W0211 18:00:37.612] I0211 18:00:37.528680   57378 controller_utils.go:1021] Waiting for caches to sync for certificate controller
W0211 18:00:37.613] E0211 18:00:37.528680   57378 core.go:162] failed to start cloud node lifecycle controller: no cloud provider provided
W0211 18:00:37.613] W0211 18:00:37.528688   57378 controllermanager.go:485] Skipping "cloud-node-lifecycle"
W0211 18:00:37.613] I0211 18:00:37.529145   57378 controllermanager.go:493] Started "pv-protection"
W0211 18:00:37.613] I0211 18:00:37.529198   57378 pv_protection_controller.go:81] Starting PV protection controller
W0211 18:00:37.613] I0211 18:00:37.529253   57378 controller_utils.go:1021] Waiting for caches to sync for PV protection controller
W0211 18:00:37.613] I0211 18:00:37.610509   57378 controller_utils.go:1028] Caches are synced for namespace controller
W0211 18:00:37.614] I0211 18:00:37.613805   57378 controller_utils.go:1028] Caches are synced for PVC protection controller
... skipping 5 lines ...
W0211 18:00:37.626] I0211 18:00:37.625606   57378 controller_utils.go:1028] Caches are synced for job controller
W0211 18:00:37.628] I0211 18:00:37.627543   57378 controller_utils.go:1028] Caches are synced for ReplicationController controller
W0211 18:00:37.629] I0211 18:00:37.628885   57378 controller_utils.go:1028] Caches are synced for certificate controller
W0211 18:00:37.631] I0211 18:00:37.630874   57378 controller_utils.go:1028] Caches are synced for service account controller
W0211 18:00:37.634] I0211 18:00:37.634046   54033 controller.go:606] quota admission added evaluator for: serviceaccounts
W0211 18:00:37.699] I0211 18:00:37.698641   57378 controller_utils.go:1028] Caches are synced for endpoint controller
W0211 18:00:37.711] W0211 18:00:37.711011   57378 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0211 18:00:37.715] I0211 18:00:37.715188   57378 controller_utils.go:1028] Caches are synced for persistent volume controller
W0211 18:00:37.716] I0211 18:00:37.716050   57378 controller_utils.go:1028] Caches are synced for daemon sets controller
W0211 18:00:37.723] I0211 18:00:37.722806   57378 controller_utils.go:1028] Caches are synced for attach detach controller
W0211 18:00:37.723] I0211 18:00:37.722812   57378 controller_utils.go:1028] Caches are synced for expand controller
W0211 18:00:37.730] I0211 18:00:37.729743   57378 controller_utils.go:1028] Caches are synced for PV protection controller
W0211 18:00:37.798] I0211 18:00:37.798009   57378 controller_utils.go:1028] Caches are synced for disruption controller
... skipping 9 lines ...
W0211 18:00:38.101] I0211 18:00:38.100384   57378 taint_manager.go:198] Starting NoExecuteTaintManager
W0211 18:00:38.101] I0211 18:00:38.100483   57378 node_lifecycle_controller.go:963] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
W0211 18:00:38.101] I0211 18:00:38.100558   57378 event.go:209] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"efb9b2ac-2e26-11e9-8b3e-0242ac110002", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller
W0211 18:00:38.197] I0211 18:00:38.196909   57378 controller_utils.go:1028] Caches are synced for garbage collector controller
W0211 18:00:38.198] I0211 18:00:38.196950   57378 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
W0211 18:00:38.217] I0211 18:00:38.217195   57378 controller_utils.go:1028] Caches are synced for ClusterRoleAggregator controller
W0211 18:00:38.228] E0211 18:00:38.227254   57378 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0211 18:00:38.234] E0211 18:00:38.233561   57378 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0211 18:00:38.240] E0211 18:00:38.239256   57378 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0211 18:00:38.262] The Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.0.0.1": provided IP is already allocated
I0211 18:00:38.371] NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
I0211 18:00:38.371] kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   38s
I0211 18:00:38.378] Recording: run_kubectl_version_tests
I0211 18:00:38.379] Running command: run_kubectl_version_tests
I0211 18:00:38.411] 
... skipping 12 lines ...
I0211 18:00:38.519]   "compiler": "gc",
I0211 18:00:38.519]   "platform": "linux/amd64"
I0211 18:00:38.707] }+++ [0211 18:00:38] Testing kubectl version: check client only output matches expected output
I0211 18:00:38.890] Successful: the flag '--client' shows correct client info
I0211 18:00:38.900] Successful: the flag '--client' correctly has no server version info
I0211 18:00:38.904] +++ [0211 18:00:38] Testing kubectl version: verify json output
W0211 18:00:39.004] E0211 18:00:38.939002   57378 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0211 18:00:39.005] I0211 18:00:38.992433   57378 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0211 18:00:39.093] I0211 18:00:39.092863   57378 controller_utils.go:1028] Caches are synced for garbage collector controller
I0211 18:00:39.194] Successful: --output json has correct client info
I0211 18:00:39.194] Successful: --output json has correct server info
I0211 18:00:39.194] +++ [0211 18:00:39] Testing kubectl version: verify json output using additional --client flag does not contain serverVersion
I0211 18:00:39.272] Successful: --client --output json has correct client info
... skipping 50 lines ...
I0211 18:00:42.787] +++ working dir: /go/src/k8s.io/kubernetes
I0211 18:00:42.791] +++ command: run_RESTMapper_evaluation_tests
I0211 18:00:42.808] +++ [0211 18:00:42] Creating namespace namespace-1549908042-4219
I0211 18:00:42.896] namespace/namespace-1549908042-4219 created
I0211 18:00:42.977] Context "test" modified.
I0211 18:00:42.988] +++ [0211 18:00:42] Testing RESTMapper
I0211 18:00:43.141] +++ [0211 18:00:43] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0211 18:00:43.167] +++ exit code: 0
I0211 18:00:43.334] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0211 18:00:43.335] bindings                                                                      true         Binding
I0211 18:00:43.335] componentstatuses                 cs                                          false        ComponentStatus
I0211 18:00:43.335] configmaps                        cm                                          true         ConfigMap
I0211 18:00:43.335] endpoints                         ep                                          true         Endpoints
... skipping 585 lines ...
I0211 18:01:05.144] core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 18:01:05.350] core.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 18:01:05.465] core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 18:01:05.669] core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 18:01:05.779] core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 18:01:05.881] pod "valid-pod" force deleted
W0211 18:01:05.982] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0211 18:01:05.982] error: setting 'all' parameter but found a non empty selector. 
W0211 18:01:05.982] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0211 18:01:06.083] core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{$id_field}}:{{end}}: 
I0211 18:01:06.121] core.sh:211: Successful get namespaces {{range.items}}{{ if eq $id_field \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
I0211 18:01:06.204] namespace/test-kubectl-describe-pod created
I0211 18:01:06.321] core.sh:215: Successful get namespaces/test-kubectl-describe-pod {{.metadata.name}}: test-kubectl-describe-pod
I0211 18:01:06.432] core.sh:219: Successful get secrets --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 11 lines ...
I0211 18:01:07.578] poddisruptionbudget.policy/test-pdb-3 created
I0211 18:01:07.704] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0211 18:01:07.792] poddisruptionbudget.policy/test-pdb-4 created
I0211 18:01:07.919] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0211 18:01:08.148] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 18:01:08.377] pod/env-test-pod created
W0211 18:01:08.478] error: min-available and max-unavailable cannot be both specified
I0211 18:01:08.663] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0211 18:01:08.663] Name:               env-test-pod
I0211 18:01:08.663] Namespace:          test-kubectl-describe-pod
I0211 18:01:08.664] Priority:           0
I0211 18:01:08.664] PriorityClassName:  <none>
I0211 18:01:08.664] Node:               <none>
... skipping 145 lines ...
I0211 18:01:22.228] (Bservice "modified" deleted
I0211 18:01:22.329] replicationcontroller "modified" deleted
I0211 18:01:22.661] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 18:01:22.858] pod/valid-pod created
I0211 18:01:22.980] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 18:01:23.164] Successful
I0211 18:01:23.164] message:Error from server: cannot restore map from string
I0211 18:01:23.164] has:cannot restore map from string
W0211 18:01:23.265] E0211 18:01:23.152583   54033 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
I0211 18:01:23.365] Successful
I0211 18:01:23.366] message:pod/valid-pod patched (no change)
I0211 18:01:23.366] has:patched (no change)
I0211 18:01:23.370] pod/valid-pod patched
I0211 18:01:23.496] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0211 18:01:23.616] core.sh:457: Successful get pods {{range.items}}{{.metadata.annotations}}:{{end}}: map[kubernetes.io/change-cause:kubectl patch pod valid-pod --server=http://127.0.0.1:8080 --match-server-version=true --record=true --patch={"spec":{"containers":[{"name": "kubernetes-serve-hostname", "image": "nginx"}]}}]:
... skipping 4 lines ...
I0211 18:01:24.127] pod/valid-pod patched
I0211 18:01:24.246] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0211 18:01:24.333] pod/valid-pod patched
I0211 18:01:24.461] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0211 18:01:24.649] pod/valid-pod patched
I0211 18:01:24.784] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0211 18:01:25.001] +++ [0211 18:01:24] "kubectl patch with resourceVersion 501" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0211 18:01:25.327] pod "valid-pod" deleted
I0211 18:01:25.338] pod/valid-pod replaced
I0211 18:01:25.472] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0211 18:01:25.683] Successful
I0211 18:01:25.684] message:error: --grace-period must have --force specified
I0211 18:01:25.684] has:\-\-grace-period must have \-\-force specified
I0211 18:01:25.908] Successful
I0211 18:01:25.909] message:error: --timeout must have --force specified
I0211 18:01:25.909] has:\-\-timeout must have \-\-force specified
I0211 18:01:26.102] node/node-v1-test created
W0211 18:01:26.203] W0211 18:01:26.101990   57378 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0211 18:01:26.322] node/node-v1-test replaced
I0211 18:01:26.448] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0211 18:01:26.533] node "node-v1-test" deleted
I0211 18:01:26.659] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0211 18:01:27.016] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0211 18:01:28.283] core.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 16 lines ...
I0211 18:01:28.577]     name: kubernetes-pause
I0211 18:01:28.577] has:localonlyvalue
I0211 18:01:28.604] core.sh:585: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0211 18:01:28.820] core.sh:589: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0211 18:01:28.928] core.sh:593: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0211 18:01:29.029] pod/valid-pod labeled
W0211 18:01:29.130] error: 'name' already has a value (valid-pod), and --overwrite is false
I0211 18:01:29.231] core.sh:597: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
I0211 18:01:29.284] core.sh:601: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 18:01:29.382] pod "valid-pod" force deleted
W0211 18:01:29.483] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0211 18:01:29.584] core.sh:605: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 18:01:29.584] +++ [0211 18:01:29] Creating namespace namespace-1549908089-751
... skipping 82 lines ...
I0211 18:01:37.896] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0211 18:01:37.899] +++ working dir: /go/src/k8s.io/kubernetes
I0211 18:01:37.902] +++ command: run_kubectl_create_error_tests
I0211 18:01:37.917] +++ [0211 18:01:37] Creating namespace namespace-1549908097-12305
I0211 18:01:37.995] namespace/namespace-1549908097-12305 created
I0211 18:01:38.084] Context "test" modified.
I0211 18:01:38.094] +++ [0211 18:01:38] Testing kubectl create with error
W0211 18:01:38.194] Error: required flag(s) "filename" not set
W0211 18:01:38.194] 
W0211 18:01:38.194] 
W0211 18:01:38.195] Examples:
W0211 18:01:38.195]   # Create a pod using the data in pod.json.
W0211 18:01:38.195]   kubectl create -f ./pod.json
W0211 18:01:38.195]   
... skipping 38 lines ...
W0211 18:01:38.199]   kubectl create -f FILENAME [options]
W0211 18:01:38.199] 
W0211 18:01:38.199] Use "kubectl <command> --help" for more information about a given command.
W0211 18:01:38.199] Use "kubectl options" for a list of global command-line options (applies to all commands).
W0211 18:01:38.199] 
W0211 18:01:38.199] required flag(s) "filename" not set
I0211 18:01:38.380] +++ [0211 18:01:38] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0211 18:01:38.480] kubectl convert is DEPRECATED and will be removed in a future version.
W0211 18:01:38.481] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0211 18:01:38.597] +++ exit code: 0
I0211 18:01:38.644] Recording: run_kubectl_apply_tests
I0211 18:01:38.645] Running command: run_kubectl_apply_tests
I0211 18:01:38.675] 
... skipping 21 lines ...
W0211 18:01:41.180] I0211 18:01:40.581415   57378 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549908098-11706", Name:"test-deployment-retainkeys", UID:"14c3e6bf-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"514", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-deployment-retainkeys-ddc987c6 to 1
W0211 18:01:41.181] I0211 18:01:40.584399   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908098-11706", Name:"test-deployment-retainkeys-ddc987c6", UID:"15333e82-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"518", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-deployment-retainkeys-ddc987c6-2xr9k
I0211 18:01:41.281] apply.sh:67: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 18:01:41.407] pod/selector-test-pod created
I0211 18:01:41.539] apply.sh:71: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0211 18:01:41.647] Successful
I0211 18:01:41.648] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0211 18:01:41.648] has:pods "selector-test-pod-dont-apply" not found
I0211 18:01:41.735] pod "selector-test-pod" deleted
I0211 18:01:41.856] apply.sh:80: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 18:01:42.139] pod/test-pod created (server dry run)
I0211 18:01:42.269] apply.sh:85: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 18:01:42.469] pod/test-pod created
... skipping 5 lines ...
W0211 18:01:43.625] I0211 18:01:43.624803   54033 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0211 18:01:43.626] I0211 18:01:43.624834   54033 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0211 18:01:43.626] I0211 18:01:43.624873   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:01:43.626] I0211 18:01:43.625348   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:01:43.631] I0211 18:01:43.631255   54033 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
I0211 18:01:43.732] kind.mygroup.example.com/myobj created (server dry run)
W0211 18:01:43.833] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0211 18:01:43.934] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0211 18:01:43.988] apply.sh:129: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 18:01:44.195] pod/a created
I0211 18:01:45.521] apply.sh:134: Successful get pods a {{.metadata.name}}: a
I0211 18:01:45.623] Successful
I0211 18:01:45.623] message:Error from server (NotFound): pods "b" not found
I0211 18:01:45.623] has:pods "b" not found
I0211 18:01:45.807] pod/b created
I0211 18:01:45.821] pod/a pruned
I0211 18:01:47.336] apply.sh:142: Successful get pods b {{.metadata.name}}: b
I0211 18:01:47.444] Successful
I0211 18:01:47.444] message:Error from server (NotFound): pods "a" not found
I0211 18:01:47.445] has:pods "a" not found
I0211 18:01:47.539] pod "b" deleted
I0211 18:01:47.663] apply.sh:152: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 18:01:47.866] pod/a created
I0211 18:01:47.996] apply.sh:157: Successful get pods a {{.metadata.name}}: a
I0211 18:01:48.110] Successful
I0211 18:01:48.111] message:Error from server (NotFound): pods "b" not found
I0211 18:01:48.111] has:pods "b" not found
I0211 18:01:48.322] pod/b created
I0211 18:01:48.464] apply.sh:165: Successful get pods a {{.metadata.name}}: a
I0211 18:01:48.578] apply.sh:166: Successful get pods b {{.metadata.name}}: b
I0211 18:01:48.665] pod "a" deleted
I0211 18:01:48.670] pod "b" deleted
I0211 18:01:48.891] Successful
I0211 18:01:48.891] message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
I0211 18:01:48.891] has:all resources selected for prune without explicitly passing --all
I0211 18:01:49.078] pod/a created
I0211 18:01:49.085] pod/b created
I0211 18:01:49.093] service/prune-svc created
I0211 18:01:50.423] apply.sh:178: Successful get pods a {{.metadata.name}}: a
I0211 18:01:50.538] apply.sh:179: Successful get pods b {{.metadata.name}}: b
... skipping 138 lines ...
I0211 18:02:03.604] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 18:02:03.794] pod/selector-test-pod created
W0211 18:02:03.894] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0211 18:02:03.895] I0211 18:02:02.698484   54033 controller.go:606] quota admission added evaluator for: cronjobs.batch
I0211 18:02:03.995] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0211 18:02:04.020] Successful
I0211 18:02:04.021] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0211 18:02:04.021] has:pods "selector-test-pod-dont-apply" not found
I0211 18:02:04.112] pod "selector-test-pod" deleted
I0211 18:02:04.145] +++ exit code: 0
I0211 18:02:04.195] Recording: run_kubectl_apply_deployments_tests
I0211 18:02:04.196] Running command: run_kubectl_apply_deployments_tests
I0211 18:02:04.230] 
... skipping 38 lines ...
W0211 18:02:07.209] I0211 18:02:07.111251   57378 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549908124-32148", Name:"nginx", UID:"2502c4e8-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"711", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-776cc67f78 to 3
W0211 18:02:07.210] I0211 18:02:07.114511   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908124-32148", Name:"nginx-776cc67f78", UID:"25034b1f-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"712", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-776cc67f78-7xfd8
W0211 18:02:07.210] I0211 18:02:07.116968   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908124-32148", Name:"nginx-776cc67f78", UID:"25034b1f-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"712", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-776cc67f78-dpgn2
W0211 18:02:07.211] I0211 18:02:07.117132   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908124-32148", Name:"nginx-776cc67f78", UID:"25034b1f-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"712", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-776cc67f78-4knmv
I0211 18:02:07.311] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I0211 18:02:11.496] Successful
I0211 18:02:11.496] message:Error from server (Conflict): error when applying patch:
I0211 18:02:11.496] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1549908124-32148\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0211 18:02:11.497] to:
I0211 18:02:11.497] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I0211 18:02:11.497] Name: "nginx", Namespace: "namespace-1549908124-32148"
I0211 18:02:11.498] Object: &{map["spec":map["replicas":'\x03' "selector":map["matchLabels":map["name":"nginx1"]] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["schedulerName":"default-scheduler" "containers":[map["name":"nginx" "image":"k8s.gcr.io/nginx:test-cmd" "ports":[map["protocol":"TCP" "containerPort":'P']] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File" "imagePullPolicy":"IfNotPresent"]] "restartPolicy":"Always" "terminationGracePeriodSeconds":'\x1e' "dnsPolicy":"ClusterFirst" "securityContext":map[]]] "strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxSurge":'\x01' "maxUnavailable":'\x01']] "revisionHistoryLimit":%!q(int64=+2147483647) "progressDeadlineSeconds":%!q(int64=+2147483647)] "status":map["updatedReplicas":'\x03' "unavailableReplicas":'\x03' "conditions":[map["type":"Available" "status":"False" "lastUpdateTime":"2019-02-11T18:02:07Z" "lastTransitionTime":"2019-02-11T18:02:07Z" "reason":"MinimumReplicasUnavailable" "message":"Deployment does not have minimum availability."]] "observedGeneration":'\x01' "replicas":'\x03'] "kind":"Deployment" "apiVersion":"extensions/v1beta1" "metadata":map["namespace":"namespace-1549908124-32148" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1549908124-32148/deployments/nginx" "uid":"2502c4e8-2e27-11e9-8b3e-0242ac110002" "generation":'\x01' "creationTimestamp":"2019-02-11T18:02:07Z" "labels":map["name":"nginx"] "name":"nginx" "resourceVersion":"724" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1549908124-32148\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n" "deployment.kubernetes.io/revision":"1"]]]}
I0211 18:02:11.498] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I0211 18:02:11.498] has:Error from server (Conflict)
W0211 18:02:15.817] E0211 18:02:15.816870   57378 replica_set.go:450] Sync "namespace-1549908124-32148/nginx-776cc67f78" failed with Operation cannot be fulfilled on replicasets.apps "nginx-776cc67f78": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1549908124-32148/nginx-776cc67f78, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 25034b1f-2e27-11e9-8b3e-0242ac110002, UID in object meta: 
I0211 18:02:16.745] deployment.extensions/nginx configured
W0211 18:02:16.846] I0211 18:02:16.748354   57378 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549908124-32148", Name:"nginx", UID:"2ac12d5a-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"746", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7bd4fbc645 to 3
W0211 18:02:16.846] I0211 18:02:16.752189   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908124-32148", Name:"nginx-7bd4fbc645", UID:"2ac1dc00-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"747", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7bd4fbc645-d49qk
W0211 18:02:16.847] I0211 18:02:16.755788   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908124-32148", Name:"nginx-7bd4fbc645", UID:"2ac1dc00-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"747", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7bd4fbc645-cd4fz
W0211 18:02:16.847] I0211 18:02:16.756270   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908124-32148", Name:"nginx-7bd4fbc645", UID:"2ac1dc00-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"747", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7bd4fbc645-bntsf
I0211 18:02:16.947] Successful
... skipping 141 lines ...
I0211 18:02:24.754] +++ [0211 18:02:24] Creating namespace namespace-1549908144-28106
I0211 18:02:24.840] namespace/namespace-1549908144-28106 created
I0211 18:02:24.925] Context "test" modified.
I0211 18:02:24.937] +++ [0211 18:02:24] Testing kubectl get
I0211 18:02:25.048] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 18:02:25.160] Successful
I0211 18:02:25.160] message:Error from server (NotFound): pods "abc" not found
I0211 18:02:25.160] has:pods "abc" not found
I0211 18:02:25.275] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 18:02:25.382] Successful
I0211 18:02:25.382] message:Error from server (NotFound): pods "abc" not found
I0211 18:02:25.382] has:pods "abc" not found
I0211 18:02:25.486] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 18:02:25.586] Successful
I0211 18:02:25.586] message:{
I0211 18:02:25.586]     "apiVersion": "v1",
I0211 18:02:25.586]     "items": [],
... skipping 23 lines ...
I0211 18:02:25.980] has not:No resources found
I0211 18:02:26.078] Successful
I0211 18:02:26.078] message:NAME
I0211 18:02:26.078] has not:No resources found
I0211 18:02:26.190] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 18:02:26.326] Successful
I0211 18:02:26.326] message:error: the server doesn't have a resource type "foobar"
I0211 18:02:26.326] has not:No resources found
I0211 18:02:26.422] Successful
I0211 18:02:26.422] message:No resources found.
I0211 18:02:26.422] has:No resources found
I0211 18:02:26.525] Successful
I0211 18:02:26.525] message:
I0211 18:02:26.525] has not:No resources found
I0211 18:02:26.632] Successful
I0211 18:02:26.632] message:No resources found.
I0211 18:02:26.632] has:No resources found
I0211 18:02:26.737] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 18:02:26.843] Successful
I0211 18:02:26.843] message:Error from server (NotFound): pods "abc" not found
I0211 18:02:26.844] has:pods "abc" not found
I0211 18:02:26.846] FAIL!
I0211 18:02:26.846] message:Error from server (NotFound): pods "abc" not found
I0211 18:02:26.846] has not:List
I0211 18:02:26.847] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0211 18:02:26.981] Successful
I0211 18:02:26.981] message:I0211 18:02:26.921155   69793 loader.go:359] Config loaded from file /tmp/tmp.IR31pKHqVr/.kube/config
I0211 18:02:26.981] I0211 18:02:26.922919   69793 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0211 18:02:26.982] I0211 18:02:26.948545   69793 round_trippers.go:438] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 653 lines ...
I0211 18:02:30.715] }
I0211 18:02:30.832] get.sh:155: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 18:02:31.136] <no value>Successful
I0211 18:02:31.137] message:valid-pod:
I0211 18:02:31.137] has:valid-pod:
I0211 18:02:31.240] Successful
I0211 18:02:31.240] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0211 18:02:31.240] 	template was:
I0211 18:02:31.240] 		{.missing}
I0211 18:02:31.240] 	object given to jsonpath engine was:
I0211 18:02:31.241] 		map[string]interface {}{"kind":"Pod", "apiVersion":"v1", "metadata":map[string]interface {}{"creationTimestamp":"2019-02-11T18:02:30Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1549908150-21", "selfLink":"/api/v1/namespaces/namespace-1549908150-21/pods/valid-pod", "uid":"330363bb-2e27-11e9-8b3e-0242ac110002", "resourceVersion":"820"}, "spec":map[string]interface {}{"terminationGracePeriodSeconds":30, "dnsPolicy":"ClusterFirst", "securityContext":map[string]interface {}{}, "schedulerName":"default-scheduler", "priority":0, "enableServiceLinks":true, "containers":[]interface {}{map[string]interface {}{"resources":map[string]interface {}{"requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "image":"k8s.gcr.io/serve_hostname"}}, "restartPolicy":"Always"}, "status":map[string]interface {}{"qosClass":"Guaranteed", "phase":"Pending"}}
I0211 18:02:31.241] has:missing is not found
W0211 18:02:31.342] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
I0211 18:02:31.442] Successful
I0211 18:02:31.443] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0211 18:02:31.443] 	template was:
I0211 18:02:31.443] 		{{.missing}}
I0211 18:02:31.443] 	raw data was:
I0211 18:02:31.444] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-02-11T18:02:30Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1549908150-21","resourceVersion":"820","selfLink":"/api/v1/namespaces/namespace-1549908150-21/pods/valid-pod","uid":"330363bb-2e27-11e9-8b3e-0242ac110002"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0211 18:02:31.444] 	object given to template engine was:
I0211 18:02:31.445] 		map[spec:map[enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30 containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst] status:map[phase:Pending qosClass:Guaranteed] apiVersion:v1 kind:Pod metadata:map[name:valid-pod namespace:namespace-1549908150-21 resourceVersion:820 selfLink:/api/v1/namespaces/namespace-1549908150-21/pods/valid-pod uid:330363bb-2e27-11e9-8b3e-0242ac110002 creationTimestamp:2019-02-11T18:02:30Z labels:map[name:valid-pod]]]
... skipping 87 lines ...
I0211 18:02:34.788]   terminationGracePeriodSeconds: 30
I0211 18:02:34.788] status:
I0211 18:02:34.788]   phase: Pending
I0211 18:02:34.788]   qosClass: Guaranteed
I0211 18:02:34.788] has:name: valid-pod
I0211 18:02:34.801] Successful
I0211 18:02:34.802] message:Error from server (NotFound): pods "invalid-pod" not found
I0211 18:02:34.802] has:"invalid-pod" not found
I0211 18:02:34.901] pod "valid-pod" deleted
I0211 18:02:35.037] get.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 18:02:35.242] pod/redis-master created
I0211 18:02:35.245] pod/valid-pod created
I0211 18:02:35.366] Successful
... skipping 256 lines ...
I0211 18:02:40.469] Running command: run_create_secret_tests
I0211 18:02:40.502] 
I0211 18:02:40.506] +++ Running case: test-cmd.run_create_secret_tests 
I0211 18:02:40.509] +++ working dir: /go/src/k8s.io/kubernetes
I0211 18:02:40.513] +++ command: run_create_secret_tests
I0211 18:02:40.623] Successful
I0211 18:02:40.624] message:Error from server (NotFound): secrets "mysecret" not found
I0211 18:02:40.624] has:secrets "mysecret" not found
I0211 18:02:40.801] Successful
I0211 18:02:40.802] message:Error from server (NotFound): secrets "mysecret" not found
I0211 18:02:40.802] has:secrets "mysecret" not found
I0211 18:02:40.804] Successful
I0211 18:02:40.804] message:user-specified
I0211 18:02:40.804] has:user-specified
I0211 18:02:40.886] Successful
I0211 18:02:40.971] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"39319ddd-2e27-11e9-8b3e-0242ac110002","resourceVersion":"896","creationTimestamp":"2019-02-11T18:02:40Z"}}
... skipping 99 lines ...
I0211 18:02:44.284] has:Timeout exceeded while reading body
I0211 18:02:44.388] Successful
I0211 18:02:44.388] message:NAME        READY   STATUS    RESTARTS   AGE
I0211 18:02:44.388] valid-pod   0/1     Pending   0          2s
I0211 18:02:44.389] has:valid-pod
I0211 18:02:44.477] Successful
I0211 18:02:44.478] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0211 18:02:44.478] has:Invalid timeout value
I0211 18:02:44.569] pod "valid-pod" deleted
I0211 18:02:44.598] +++ exit code: 0
I0211 18:02:44.650] Recording: run_crd_tests
I0211 18:02:44.650] Running command: run_crd_tests
I0211 18:02:44.681] 
... skipping 167 lines ...
I0211 18:02:50.759] foo.company.com/test patched
I0211 18:02:50.892] crd.sh:237: Successful get foos/test {{.patched}}: value1
I0211 18:02:51.031] foo.company.com/test patched
I0211 18:02:51.203] crd.sh:239: Successful get foos/test {{.patched}}: value2
I0211 18:02:51.354] foo.company.com/test patched
I0211 18:02:51.531] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I0211 18:02:51.773] +++ [0211 18:02:51] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0211 18:02:51.843] {
I0211 18:02:51.844]     "apiVersion": "company.com/v1",
I0211 18:02:51.844]     "kind": "Foo",
I0211 18:02:51.844]     "metadata": {
I0211 18:02:51.844]         "annotations": {
I0211 18:02:51.844]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 112 lines ...
I0211 18:02:53.490] has:bar.company.com/test
I0211 18:02:53.576] bar.company.com "test" deleted
W0211 18:02:53.677] /go/src/k8s.io/kubernetes/hack/lib/test.sh: line 264: 72959 Killed                  while [ ${tries} -lt 10 ]; do
W0211 18:02:53.677]     tries=$((tries+1)); kubectl "${kube_flags[@]}" patch bars/test -p "{\"patched\":\"${tries}\"}" --type=merge; sleep 1;
W0211 18:02:53.677] done
W0211 18:02:53.677] /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh: line 295: 72958 Killed                  kubectl "${kube_flags[@]}" get bars --request-timeout=1m --watch-only -o name
W0211 18:03:09.303] E0211 18:03:09.302273   57378 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "company.com/v1, Resource=bars": unable to monitor quota for resource "company.com/v1, Resource=bars", couldn't start monitor for resource "company.com/v1, Resource=foos": unable to monitor quota for resource "company.com/v1, Resource=foos", couldn't start monitor for resource "mygroup.example.com/v1alpha1, Resource=resources": unable to monitor quota for resource "mygroup.example.com/v1alpha1, Resource=resources", couldn't start monitor for resource "company.com/v1, Resource=validfoos": unable to monitor quota for resource "company.com/v1, Resource=validfoos"]
W0211 18:03:09.862] I0211 18:03:09.861556   57378 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0211 18:03:09.864] I0211 18:03:09.863369   54033 clientconn.go:551] parsed scheme: ""
W0211 18:03:09.864] I0211 18:03:09.863404   54033 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0211 18:03:09.864] I0211 18:03:09.863448   54033 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0211 18:03:09.864] I0211 18:03:09.863538   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:03:09.865] I0211 18:03:09.864138   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 53 lines ...
I0211 18:03:17.109] crd.sh:437: Successful get foos {{range.items}}{{.metadata.name}}:{{end}}: test:
I0211 18:03:17.236] crd.sh:438: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 18:03:17.471] bar.company.com/test created
I0211 18:03:17.478] foo.company.com/test pruned
I0211 18:03:17.619] Waiting for Get foos {{range.items}}{{.metadata.name}}:{{end}} : expected: , got: test:
I0211 18:03:17.623] 
I0211 18:03:17.631] crd.sh:443: FAIL!
I0211 18:03:17.631] Get foos {{range.items}}{{.metadata.name}}:{{end}}
I0211 18:03:17.631]   Expected: 
I0211 18:03:17.631]   Got:      test:
I0211 18:03:17.631] 51 /go/src/k8s.io/kubernetes/hack/lib/test.sh
I0211 18:03:17.717] +++ exit code: 1
I0211 18:03:17.727] +++ error: 1
I0211 18:03:17.783] Error when running run_crd_tests
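The failed crd.sh:443 assertion follows the prune step logged above ('bar.company.com/test created' / 'foo.company.com/test pruned'): after apply --prune, the foo object is expected to be gone, but a subsequent get still returns it. A sketch of the pruning call shape (file path and label selector are illustrative; the exact fixture is not shown in this excerpt):

  # objects matching the selector that are absent from the applied files
  # are deleted; custom resources must be whitelisted explicitly
  kubectl apply --prune -l pruneGroup=true -f testdata/prune/ \
    --prune-whitelist=company.com/v1/Foo --prune-whitelist=company.com/v1/Bar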
I0211 18:03:17.783] Recording: run_cmd_with_img_tests
I0211 18:03:17.783] Running command: run_cmd_with_img_tests
I0211 18:03:17.818] 
I0211 18:03:17.821] +++ Running case: test-cmd.run_cmd_with_img_tests 
I0211 18:03:17.824] +++ working dir: /go/src/k8s.io/kubernetes
I0211 18:03:17.829] +++ command: run_cmd_with_img_tests
... skipping 14 lines ...
W0211 18:03:18.157] I0211 18:03:18.156360   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908197-21360", Name:"test1-848d5d4b47", UID:"4f5b0442-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"993", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-848d5d4b47-c8mmf
I0211 18:03:18.257] Successful
I0211 18:03:18.258] message:deployment.apps/test1 created
I0211 18:03:18.258] has:deployment.apps/test1 created
I0211 18:03:18.268] deployment.extensions "test1" deleted
I0211 18:03:18.389] Successful
I0211 18:03:18.389] message:error: Invalid image name "InvalidImageName": invalid reference format
I0211 18:03:18.389] has:error: Invalid image name "InvalidImageName": invalid reference format
I0211 18:03:18.418] +++ exit code: 0
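The image check above is client-side reference validation; a sketch (resource name illustrative):

  kubectl run test1 --image=k8s.gcr.io/nginx:1.7.9  # valid reference
  kubectl run test1 --image=InvalidImageName        # rejected: uppercase letters are not a valid repository name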
I0211 18:03:18.480] +++ [0211 18:03:18] Testing recursive resources
I0211 18:03:18.492] +++ [0211 18:03:18] Creating namespace namespace-1549908198-30803
I0211 18:03:18.588] namespace/namespace-1549908198-30803 created
I0211 18:03:18.680] Context "test" modified.
I0211 18:03:18.812] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 18:03:19.203] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 18:03:19.206] Successful
I0211 18:03:19.207] message:pod/busybox0 created
I0211 18:03:19.207] pod/busybox1 created
I0211 18:03:19.207] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0211 18:03:19.207] has:error validating data: kind not set
I0211 18:03:19.323] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 18:03:19.551] generic-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0211 18:03:19.556] Successful
I0211 18:03:19.556] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 18:03:19.557] has:Object 'Kind' is missing
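Every "Object 'Kind' is missing" error in this section traces to the same deliberately broken fixture: busybox-broken.yaml spells the field "ind" instead of "kind" (visible in the JSON above), so the decoder cannot determine the object type. A corrected pod of the same shape decodes cleanly; a sketch, reconstructed from that JSON (only the "kind" field differs):

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox2
  labels:
    app: busybox2
spec:
  restartPolicy: Always
  containers:
  - name: busybox
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ["sleep", "3600"]
EOF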
I0211 18:03:19.667] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 18:03:19.983] generic-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0211 18:03:19.985] Successful
I0211 18:03:19.986] message:pod/busybox0 replaced
I0211 18:03:19.986] pod/busybox1 replaced
I0211 18:03:19.986] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0211 18:03:19.986] has:error validating data: kind not set
I0211 18:03:20.096] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 18:03:20.202] Successful
I0211 18:03:20.202] message:Name:               busybox0
I0211 18:03:20.202] Namespace:          namespace-1549908198-30803
I0211 18:03:20.202] Priority:           0
I0211 18:03:20.202] PriorityClassName:  <none>
... skipping 159 lines ...
I0211 18:03:20.219] has:Object 'Kind' is missing
I0211 18:03:20.317] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 18:03:20.523] generic-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0211 18:03:20.527] Successful
I0211 18:03:20.527] message:pod/busybox0 annotated
I0211 18:03:20.527] pod/busybox1 annotated
I0211 18:03:20.528] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 18:03:20.528] has:Object 'Kind' is missing
I0211 18:03:20.639] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 18:03:20.958] generic-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0211 18:03:20.961] Successful
I0211 18:03:20.962] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0211 18:03:20.962] pod/busybox0 configured
I0211 18:03:20.962] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0211 18:03:20.962] pod/busybox1 configured
I0211 18:03:20.962] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0211 18:03:20.962] has:error validating data: kind not set
I0211 18:03:21.072] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 18:03:21.277] deployment.apps/nginx created
W0211 18:03:21.377] I0211 18:03:21.280571   57378 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549908198-30803", Name:"nginx", UID:"5137f9c6-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1018", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-5f7cff5b56 to 3
W0211 18:03:21.378] I0211 18:03:21.283629   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908198-30803", Name:"nginx-5f7cff5b56", UID:"5138a6ce-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1019", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5f7cff5b56-g64bc
W0211 18:03:21.378] I0211 18:03:21.285746   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908198-30803", Name:"nginx-5f7cff5b56", UID:"5138a6ce-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1019", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5f7cff5b56-gsmkx
W0211 18:03:21.379] I0211 18:03:21.286188   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908198-30803", Name:"nginx-5f7cff5b56", UID:"5138a6ce-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1019", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5f7cff5b56-9rqsv
... skipping 48 lines ...
W0211 18:03:21.901] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0211 18:03:22.001] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 18:03:22.116] generic-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 18:03:22.119] Successful
I0211 18:03:22.119] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0211 18:03:22.119] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0211 18:03:22.120] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 18:03:22.120] has:Object 'Kind' is missing
I0211 18:03:22.223] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 18:03:22.322] Successful
I0211 18:03:22.322] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 18:03:22.322] has:busybox0:busybox1:
I0211 18:03:22.324] Successful
I0211 18:03:22.325] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 18:03:22.325] has:Object 'Kind' is missing
I0211 18:03:22.439] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 18:03:22.546] pod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 18:03:22.658] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0211 18:03:22.661] Successful
I0211 18:03:22.662] message:pod/busybox0 labeled
I0211 18:03:22.662] pod/busybox1 labeled
I0211 18:03:22.662] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 18:03:22.662] has:Object 'Kind' is missing
I0211 18:03:22.769] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 18:03:22.867] pod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 18:03:22.976] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0211 18:03:22.978] Successful
I0211 18:03:22.979] message:pod/busybox0 patched
I0211 18:03:22.979] pod/busybox1 patched
I0211 18:03:22.979] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 18:03:22.979] has:Object 'Kind' is missing
I0211 18:03:23.087] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 18:03:23.304] generic-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 18:03:23.307] Successful
I0211 18:03:23.308] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0211 18:03:23.308] pod "busybox0" force deleted
I0211 18:03:23.308] pod "busybox1" force deleted
I0211 18:03:23.308] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 18:03:23.308] has:Object 'Kind' is missing
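The repeated warning above is produced by immediate (force) deletion; the flag shape per kubectl's behavior (the script's exact invocation is not shown in this excerpt):

  # skips graceful termination and prints the 'Immediate deletion...' warning
  kubectl delete pods busybox0 busybox1 --force --grace-period=0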
I0211 18:03:23.413] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 18:03:23.621] replicationcontroller/busybox0 created
I0211 18:03:23.625] replicationcontroller/busybox1 created
W0211 18:03:23.726] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0211 18:03:23.726] I0211 18:03:23.624920   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549908198-30803", Name:"busybox0", UID:"529dc482-2e27-11e9-8b3e-0242ac110002", APIVersion:"v1", ResourceVersion:"1049", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-8svzg
W0211 18:03:23.727] I0211 18:03:23.627887   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549908198-30803", Name:"busybox1", UID:"529e7a87-2e27-11e9-8b3e-0242ac110002", APIVersion:"v1", ResourceVersion:"1051", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-c7cc6
I0211 18:03:23.827] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 18:03:23.872] generic-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 18:03:23.985] generic-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I0211 18:03:24.095] generic-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I0211 18:03:24.310] generic-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0211 18:03:24.422] generic-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0211 18:03:24.425] Successful
I0211 18:03:24.426] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0211 18:03:24.426] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0211 18:03:24.426] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 18:03:24.426] has:Object 'Kind' is missing
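The HPA values asserted above (min 1, max 2, target 80%) match an autoscale call of this shape (a sketch based on the asserted values, not copied from the script):

  kubectl autoscale rc busybox0 --min=1 --max=2 --cpu-percent=80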
I0211 18:03:24.516] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0211 18:03:24.619] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0211 18:03:24.743] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 18:03:24.851] generic-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I0211 18:03:24.959] generic-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I0211 18:03:25.183] generic-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0211 18:03:25.293] generic-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0211 18:03:25.296] Successful
I0211 18:03:25.297] message:service/busybox0 exposed
I0211 18:03:25.297] service/busybox1 exposed
I0211 18:03:25.297] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 18:03:25.297] has:Object 'Kind' is missing
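The service assertions above show port 80 with an unnamed port entry ('<no value> 80'); a sketch of the expose shape that yields this:

  # with a single unnamed port, .spec.ports[0].name stays unset
  kubectl expose rc busybox0 --port=80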
I0211 18:03:25.409] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 18:03:25.518] generic-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I0211 18:03:25.631] generic-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I0211 18:03:25.865] generic-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I0211 18:03:25.974] generic-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I0211 18:03:25.977] Successful
I0211 18:03:25.977] message:replicationcontroller/busybox0 scaled
I0211 18:03:25.977] replicationcontroller/busybox1 scaled
I0211 18:03:25.978] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 18:03:25.978] has:Object 'Kind' is missing
W0211 18:03:26.079] I0211 18:03:25.738594   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549908198-30803", Name:"busybox0", UID:"529dc482-2e27-11e9-8b3e-0242ac110002", APIVersion:"v1", ResourceVersion:"1070", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-kgj56
W0211 18:03:26.079] I0211 18:03:25.749945   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549908198-30803", Name:"busybox1", UID:"529e7a87-2e27-11e9-8b3e-0242ac110002", APIVersion:"v1", ResourceVersion:"1074", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-9sm2m
I0211 18:03:26.180] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 18:03:26.317] generic-resources.sh:381: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 18:03:26.321] Successful
I0211 18:03:26.321] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0211 18:03:26.321] replicationcontroller "busybox0" force deleted
I0211 18:03:26.322] replicationcontroller "busybox1" force deleted
I0211 18:03:26.322] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 18:03:26.322] has:Object 'Kind' is missing
I0211 18:03:26.438] generic-resources.sh:386: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 18:03:26.637] deployment.apps/nginx1-deployment created
I0211 18:03:26.641] deployment.apps/nginx0-deployment created
W0211 18:03:26.742] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0211 18:03:26.742] I0211 18:03:26.641342   57378 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549908198-30803", Name:"nginx1-deployment", UID:"5469fb01-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1091", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-7c76c6cbb8 to 2
W0211 18:03:26.742] I0211 18:03:26.645180   57378 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549908198-30803", Name:"nginx0-deployment", UID:"546aa03a-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1092", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-7bb85585d7 to 2
W0211 18:03:26.743] I0211 18:03:26.645281   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908198-30803", Name:"nginx1-deployment-7c76c6cbb8", UID:"546a9ff4-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1093", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7c76c6cbb8-qc5px
W0211 18:03:26.743] I0211 18:03:26.648149   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908198-30803", Name:"nginx0-deployment-7bb85585d7", UID:"546b4592-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1097", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-7bb85585d7-l7mkk
W0211 18:03:26.743] I0211 18:03:26.648797   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908198-30803", Name:"nginx1-deployment-7c76c6cbb8", UID:"546a9ff4-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1093", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7c76c6cbb8-gxppg
W0211 18:03:26.743] I0211 18:03:26.651724   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908198-30803", Name:"nginx0-deployment-7bb85585d7", UID:"546b4592-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1097", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-7bb85585d7-cqr59
I0211 18:03:26.844] generic-resources.sh:390: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0211 18:03:26.889] generic-resources.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0211 18:03:27.122] generic-resources.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0211 18:03:27.125] Successful
I0211 18:03:27.125] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0211 18:03:27.126] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0211 18:03:27.126] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0211 18:03:27.126] has:Object 'Kind' is missing
I0211 18:03:27.226] deployment.apps/nginx1-deployment paused
I0211 18:03:27.230] deployment.apps/nginx0-deployment paused
I0211 18:03:27.358] generic-resources.sh:402: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0211 18:03:27.361] Successful
I0211 18:03:27.361] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
I0211 18:03:27.735] 1         <none>
I0211 18:03:27.735] 
I0211 18:03:27.735] deployment.apps/nginx0-deployment 
I0211 18:03:27.735] REVISION  CHANGE-CAUSE
I0211 18:03:27.735] 1         <none>
I0211 18:03:27.736] 
I0211 18:03:27.736] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0211 18:03:27.736] has:nginx0-deployment
I0211 18:03:27.738] Successful
I0211 18:03:27.738] message:deployment.apps/nginx1-deployment 
I0211 18:03:27.738] REVISION  CHANGE-CAUSE
I0211 18:03:27.738] 1         <none>
I0211 18:03:27.738] 
I0211 18:03:27.738] deployment.apps/nginx0-deployment 
I0211 18:03:27.739] REVISION  CHANGE-CAUSE
I0211 18:03:27.739] 1         <none>
I0211 18:03:27.739] 
I0211 18:03:27.739] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0211 18:03:27.739] has:nginx1-deployment
I0211 18:03:27.741] Successful
I0211 18:03:27.741] message:deployment.apps/nginx1-deployment 
I0211 18:03:27.741] REVISION  CHANGE-CAUSE
I0211 18:03:27.741] 1         <none>
I0211 18:03:27.741] 
I0211 18:03:27.741] deployment.apps/nginx0-deployment 
I0211 18:03:27.741] REVISION  CHANGE-CAUSE
I0211 18:03:27.741] 1         <none>
I0211 18:03:27.741] 
I0211 18:03:27.742] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0211 18:03:27.742] has:Object 'Kind' is missing
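The three blocks above match one rollout-history run against different substrings; assuming the recursive -f/-R form used throughout this section, the command shape is:

  kubectl rollout history -f hack/testdata/recursive/deployment -R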
I0211 18:03:27.837] deployment.apps "nginx1-deployment" force deleted
I0211 18:03:27.843] deployment.apps "nginx0-deployment" force deleted
W0211 18:03:27.943] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0211 18:03:27.944] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0211 18:03:28.966] generic-resources.sh:424: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 18:03:29.174] replicationcontroller/busybox0 created
I0211 18:03:29.178] replicationcontroller/busybox1 created
W0211 18:03:29.278] I0211 18:03:29.177514   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549908198-30803", Name:"busybox0", UID:"55ed1896-2e27-11e9-8b3e-0242ac110002", APIVersion:"v1", ResourceVersion:"1140", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-qcb4z
W0211 18:03:29.279] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0211 18:03:29.279] I0211 18:03:29.180977   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549908198-30803", Name:"busybox1", UID:"55edcc9c-2e27-11e9-8b3e-0242ac110002", APIVersion:"v1", ResourceVersion:"1142", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-njhtt
I0211 18:03:29.380] generic-resources.sh:428: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 18:03:29.428] Successful
I0211 18:03:29.428] message:no rollbacker has been implemented for "ReplicationController"
I0211 18:03:29.429] no rollbacker has been implemented for "ReplicationController"
I0211 18:03:29.429] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
... skipping 2 lines ...
I0211 18:03:29.432] message:no rollbacker has been implemented for "ReplicationController"
I0211 18:03:29.432] no rollbacker has been implemented for "ReplicationController"
I0211 18:03:29.432] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 18:03:29.432] has:Object 'Kind' is missing
I0211 18:03:29.543] Successful
I0211 18:03:29.544] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 18:03:29.544] error: replicationcontrollers "busybox0" pausing is not supported
I0211 18:03:29.544] error: replicationcontrollers "busybox1" pausing is not supported
I0211 18:03:29.544] has:Object 'Kind' is missing
I0211 18:03:29.546] Successful
I0211 18:03:29.547] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 18:03:29.547] error: replicationcontrollers "busybox0" pausing is not supported
I0211 18:03:29.547] error: replicationcontrollers "busybox1" pausing is not supported
I0211 18:03:29.547] has:replicationcontrollers "busybox0" pausing is not supported
I0211 18:03:29.550] Successful
I0211 18:03:29.550] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 18:03:29.551] error: replicationcontrollers "busybox0" pausing is not supported
I0211 18:03:29.551] error: replicationcontrollers "busybox1" pausing is not supported
I0211 18:03:29.551] has:replicationcontrollers "busybox1" pausing is not supported
I0211 18:03:29.658] Successful
I0211 18:03:29.658] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 18:03:29.659] error: replicationcontrollers "busybox0" resuming is not supported
I0211 18:03:29.659] error: replicationcontrollers "busybox1" resuming is not supported
I0211 18:03:29.659] has:Object 'Kind' is missing
I0211 18:03:29.661] Successful
I0211 18:03:29.662] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 18:03:29.662] error: replicationcontrollers "busybox0" resuming is not supported
I0211 18:03:29.662] error: replicationcontrollers "busybox1" resuming is not supported
I0211 18:03:29.662] has:replicationcontrollers "busybox0" resuming is not supported
I0211 18:03:29.665] Successful
I0211 18:03:29.665] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 18:03:29.665] error: replicationcontrollers "busybox0" resuming is not supported
I0211 18:03:29.665] error: replicationcontrollers "busybox1" resuming is not supported
I0211 18:03:29.666] has:replicationcontrollers "busybox0" resuming is not supported
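Pause and resume are rollout verbs that only deployments implement, hence the per-object 'pausing/resuming is not supported' errors for replication controllers; a sketch of the probing calls (assuming the same recursive -f/-R form as above):

  kubectl rollout pause -f hack/testdata/recursive/rc -R
  kubectl rollout resume -f hack/testdata/recursive/rc -R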
I0211 18:03:29.756] replicationcontroller "busybox0" force deleted
I0211 18:03:29.760] replicationcontroller "busybox1" force deleted
W0211 18:03:29.861] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0211 18:03:29.861] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 18:03:30.772] Recording: run_namespace_tests
I0211 18:03:30.772] Running command: run_namespace_tests
I0211 18:03:30.801] 
I0211 18:03:30.805] +++ Running case: test-cmd.run_namespace_tests 
I0211 18:03:30.808] +++ working dir: /go/src/k8s.io/kubernetes
I0211 18:03:30.811] +++ command: run_namespace_tests
I0211 18:03:30.827] +++ [0211 18:03:30] Testing kubectl(v1:namespaces)
I0211 18:03:30.913] namespace/my-namespace created
I0211 18:03:31.028] core.sh:1295: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0211 18:03:31.116] namespace "my-namespace" deleted
I0211 18:03:36.250] namespace/my-namespace condition met
I0211 18:03:36.351] Successful
I0211 18:03:36.351] message:Error from server (NotFound): namespaces "my-namespace" not found
I0211 18:03:36.351] has: not found
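The 'condition met' line above is kubectl blocking until the namespace finishes terminating; a minimal sketch of that wait (timeout value illustrative):

  kubectl delete namespace my-namespace
  kubectl wait --for=delete ns/my-namespace --timeout=60s
  kubectl get ns my-namespace   # now returns the NotFound error matched above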
I0211 18:03:36.474] core.sh:1310: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I0211 18:03:36.564] namespace/other created
I0211 18:03:36.685] core.sh:1314: Successful get namespaces/other {{.metadata.name}}: other
I0211 18:03:36.798] core.sh:1318: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 18:03:36.999] pod/valid-pod created
I0211 18:03:37.122] core.sh:1322: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 18:03:37.233] core.sh:1324: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 18:03:37.332] Successful
I0211 18:03:37.333] message:error: a resource cannot be retrieved by name across all namespaces
I0211 18:03:37.333] has:a resource cannot be retrieved by name across all namespaces
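The error above is kubectl rejecting a by-name get combined with --all-namespaces; a sketch of the failing and working forms:

  kubectl get pods valid-pod --all-namespaces   # rejected, as logged above
  kubectl get pods valid-pod --namespace=other  # accepted: one namespace
  kubectl get pods --all-namespaces             # accepted: list without a name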
I0211 18:03:37.445] core.sh:1331: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 18:03:37.540] pod "valid-pod" force deleted
W0211 18:03:37.642] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0211 18:03:37.743] core.sh:1335: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 18:03:37.758] namespace "other" deleted
... skipping 115 lines ...
I0211 18:03:59.441] +++ command: run_client_config_tests
I0211 18:03:59.459] +++ [0211 18:03:59] Creating namespace namespace-1549908239-22049
I0211 18:03:59.537] namespace/namespace-1549908239-22049 created
I0211 18:03:59.615] Context "test" modified.
I0211 18:03:59.626] +++ [0211 18:03:59] Testing client config
I0211 18:03:59.712] Successful
I0211 18:03:59.712] message:error: stat missing: no such file or directory
I0211 18:03:59.712] has:missing: no such file or directory
I0211 18:03:59.795] Successful
I0211 18:03:59.796] message:error: stat missing: no such file or directory
I0211 18:03:59.796] has:missing: no such file or directory
I0211 18:03:59.881] Successful
I0211 18:03:59.881] message:error: stat missing: no such file or directory
I0211 18:03:59.881] has:missing: no such file or directory
I0211 18:03:59.968] Successful
I0211 18:03:59.968] message:Error in configuration: context was not found for specified context: missing-context
I0211 18:03:59.969] has:context was not found for specified context: missing-context
I0211 18:04:00.057] Successful
I0211 18:04:00.057] message:error: no server found for cluster "missing-cluster"
I0211 18:04:00.057] has:no server found for cluster "missing-cluster"
I0211 18:04:00.154] Successful
I0211 18:04:00.154] message:error: auth info "missing-user" does not exist
I0211 18:04:00.154] has:auth info "missing-user" does not exist
I0211 18:04:00.323] Successful
I0211 18:04:00.324] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0211 18:04:00.324] has:Error loading config file
I0211 18:04:00.407] Successful
I0211 18:04:00.407] message:error: stat missing-config: no such file or directory
I0211 18:04:00.408] has:no such file or directory
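Each client-config failure above corresponds to one misconfigured flag; a sketch of probing commands that reproduce the messages (the script's exact invocations are not shown here):

  kubectl get pods --kubeconfig=missing        # stat missing: no such file or directory
  kubectl get pods --context=missing-context   # context was not found
  kubectl get pods --cluster=missing-cluster   # no server found for cluster
  kubectl get pods --user=missing-user         # auth info does not exist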
I0211 18:04:00.429] +++ exit code: 0
I0211 18:04:00.483] Recording: run_service_accounts_tests
I0211 18:04:00.483] Running command: run_service_accounts_tests
I0211 18:04:00.515] 
I0211 18:04:00.517] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 35 lines ...
I0211 18:04:07.576] Labels:                        run=pi
I0211 18:04:07.576] Annotations:                   <none>
I0211 18:04:07.576] Schedule:                      59 23 31 2 *
I0211 18:04:07.576] Concurrency Policy:            Allow
I0211 18:04:07.576] Suspend:                       False
I0211 18:04:07.576] Successful Job History Limit:  824639368760
I0211 18:04:07.576] Failed Job History Limit:      1
I0211 18:04:07.576] Starting Deadline Seconds:     <unset>
I0211 18:04:07.577] Selector:                      <unset>
I0211 18:04:07.577] Parallelism:                   <unset>
I0211 18:04:07.577] Completions:                   <unset>
I0211 18:04:07.577] Pod Template:
I0211 18:04:07.577]   Labels:  run=pi
... skipping 34 lines ...
I0211 18:04:08.211]                 job-name=test-job
I0211 18:04:08.211]                 run=pi
I0211 18:04:08.211] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0211 18:04:08.211] Parallelism:    1
I0211 18:04:08.212] Completions:    1
I0211 18:04:08.212] Start Time:     Mon, 11 Feb 2019 18:04:07 +0000
I0211 18:04:08.212] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0211 18:04:08.212] Pod Template:
I0211 18:04:08.212]   Labels:  controller-uid=6cfefe90-2e27-11e9-8b3e-0242ac110002
I0211 18:04:08.212]            job-name=test-job
I0211 18:04:08.212]            run=pi
I0211 18:04:08.212]   Containers:
I0211 18:04:08.212]    pi:
... skipping 328 lines ...
I0211 18:04:18.720]     role: padawan
I0211 18:04:18.720]   sessionAffinity: None
I0211 18:04:18.720]   type: ClusterIP
I0211 18:04:18.720] status:
I0211 18:04:18.720]   loadBalancer: {}
W0211 18:04:18.821] I0211 18:04:18.643945   57378 namespace_controller.go:171] Namespace has been deleted test-jobs
W0211 18:04:18.821] error: you must specify resources by --filename when --local is set.
W0211 18:04:18.821] Example resource specifications include:
W0211 18:04:18.821]    '-f rsrc.yaml'
W0211 18:04:18.821]    '--filename=rsrc.json'
I0211 18:04:18.922] core.sh:886: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0211 18:04:19.122] core.sh:893: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0211 18:04:19.223] service "redis-master" deleted
... skipping 93 lines ...
I0211 18:04:26.388] apps.sh:80: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0211 18:04:26.494] apps.sh:81: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0211 18:04:26.612] daemonset.extensions/bind rolled back
I0211 18:04:26.731] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0211 18:04:26.831] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0211 18:04:26.945] Successful
I0211 18:04:26.946] message:error: unable to find specified revision 1000000 in history
I0211 18:04:26.946] has:unable to find specified revision
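The revision error above is rollout undo targeting a history entry that was never recorded; per the daemonset used in this section:

  kubectl rollout undo daemonset/bind --to-revision=1000000   # fails: revision not in history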
I0211 18:04:27.056] apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0211 18:04:27.153] apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0211 18:04:27.263] daemonset.extensions/bind rolled back
I0211 18:04:27.382] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0211 18:04:27.499] apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 28 lines ...
I0211 18:04:29.176] Namespace:    namespace-1549908267-9513
I0211 18:04:29.176] Selector:     app=guestbook,tier=frontend
I0211 18:04:29.176] Labels:       app=guestbook
I0211 18:04:29.176]               tier=frontend
I0211 18:04:29.177] Annotations:  <none>
I0211 18:04:29.177] Replicas:     3 current / 3 desired
I0211 18:04:29.177] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 18:04:29.177] Pod Template:
I0211 18:04:29.177]   Labels:  app=guestbook
I0211 18:04:29.177]            tier=frontend
I0211 18:04:29.177]   Containers:
I0211 18:04:29.177]    php-redis:
I0211 18:04:29.177]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0211 18:04:29.311] Namespace:    namespace-1549908267-9513
I0211 18:04:29.312] Selector:     app=guestbook,tier=frontend
I0211 18:04:29.312] Labels:       app=guestbook
I0211 18:04:29.312]               tier=frontend
I0211 18:04:29.312] Annotations:  <none>
I0211 18:04:29.312] Replicas:     3 current / 3 desired
I0211 18:04:29.312] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 18:04:29.312] Pod Template:
I0211 18:04:29.312]   Labels:  app=guestbook
I0211 18:04:29.312]            tier=frontend
I0211 18:04:29.312]   Containers:
I0211 18:04:29.312]    php-redis:
I0211 18:04:29.312]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0211 18:04:29.448] Namespace:    namespace-1549908267-9513
I0211 18:04:29.448] Selector:     app=guestbook,tier=frontend
I0211 18:04:29.448] Labels:       app=guestbook
I0211 18:04:29.448]               tier=frontend
I0211 18:04:29.448] Annotations:  <none>
I0211 18:04:29.448] Replicas:     3 current / 3 desired
I0211 18:04:29.448] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 18:04:29.449] Pod Template:
I0211 18:04:29.449]   Labels:  app=guestbook
I0211 18:04:29.449]            tier=frontend
I0211 18:04:29.449]   Containers:
I0211 18:04:29.449]    php-redis:
I0211 18:04:29.449]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I0211 18:04:29.585] Namespace:    namespace-1549908267-9513
I0211 18:04:29.585] Selector:     app=guestbook,tier=frontend
I0211 18:04:29.585] Labels:       app=guestbook
I0211 18:04:29.586]               tier=frontend
I0211 18:04:29.586] Annotations:  <none>
I0211 18:04:29.586] Replicas:     3 current / 3 desired
I0211 18:04:29.586] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 18:04:29.586] Pod Template:
I0211 18:04:29.586]   Labels:  app=guestbook
I0211 18:04:29.586]            tier=frontend
I0211 18:04:29.586]   Containers:
I0211 18:04:29.586]    php-redis:
I0211 18:04:29.587]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0211 18:04:29.768] Namespace:    namespace-1549908267-9513
I0211 18:04:29.768] Selector:     app=guestbook,tier=frontend
I0211 18:04:29.768] Labels:       app=guestbook
I0211 18:04:29.768]               tier=frontend
I0211 18:04:29.768] Annotations:  <none>
I0211 18:04:29.769] Replicas:     3 current / 3 desired
I0211 18:04:29.769] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 18:04:29.769] Pod Template:
I0211 18:04:29.769]   Labels:  app=guestbook
I0211 18:04:29.769]            tier=frontend
I0211 18:04:29.769]   Containers:
I0211 18:04:29.769]    php-redis:
I0211 18:04:29.769]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0211 18:04:29.893] Namespace:    namespace-1549908267-9513
I0211 18:04:29.893] Selector:     app=guestbook,tier=frontend
I0211 18:04:29.893] Labels:       app=guestbook
I0211 18:04:29.893]               tier=frontend
I0211 18:04:29.893] Annotations:  <none>
I0211 18:04:29.893] Replicas:     3 current / 3 desired
I0211 18:04:29.894] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 18:04:29.894] Pod Template:
I0211 18:04:29.894]   Labels:  app=guestbook
I0211 18:04:29.894]            tier=frontend
I0211 18:04:29.894]   Containers:
I0211 18:04:29.894]    php-redis:
I0211 18:04:29.894]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0211 18:04:30.007] Namespace:    namespace-1549908267-9513
I0211 18:04:30.008] Selector:     app=guestbook,tier=frontend
I0211 18:04:30.008] Labels:       app=guestbook
I0211 18:04:30.008]               tier=frontend
I0211 18:04:30.008] Annotations:  <none>
I0211 18:04:30.008] Replicas:     3 current / 3 desired
I0211 18:04:30.008] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 18:04:30.008] Pod Template:
I0211 18:04:30.008]   Labels:  app=guestbook
I0211 18:04:30.008]            tier=frontend
I0211 18:04:30.009]   Containers:
I0211 18:04:30.009]    php-redis:
I0211 18:04:30.009]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0211 18:04:30.141] Namespace:    namespace-1549908267-9513
I0211 18:04:30.141] Selector:     app=guestbook,tier=frontend
I0211 18:04:30.141] Labels:       app=guestbook
I0211 18:04:30.141]               tier=frontend
I0211 18:04:30.141] Annotations:  <none>
I0211 18:04:30.141] Replicas:     3 current / 3 desired
I0211 18:04:30.142] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 18:04:30.142] Pod Template:
I0211 18:04:30.142]   Labels:  app=guestbook
I0211 18:04:30.142]            tier=frontend
I0211 18:04:30.142]   Containers:
I0211 18:04:30.142]    php-redis:
I0211 18:04:30.143]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
W0211 18:04:30.465] I0211 18:04:30.369119   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549908267-9513", Name:"frontend", UID:"7981f398-2e27-11e9-8b3e-0242ac110002", APIVersion:"v1", ResourceVersion:"1400", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-hwf29
I0211 18:04:30.565] core.sh:1045: Successful get rc frontend {{.spec.replicas}}: 2
I0211 18:04:30.599] core.sh:1049: Successful get rc frontend {{.spec.replicas}}: 2
I0211 18:04:30.822] core.sh:1053: Successful get rc frontend {{.spec.replicas}}: 2
I0211 18:04:30.926] core.sh:1057: Successful get rc frontend {{.spec.replicas}}: 2
I0211 18:04:31.033] replicationcontroller/frontend scaled
W0211 18:04:31.134] error: Expected replicas to be 3, was 2
W0211 18:04:31.134] I0211 18:04:31.036904   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549908267-9513", Name:"frontend", UID:"7981f398-2e27-11e9-8b3e-0242ac110002", APIVersion:"v1", ResourceVersion:"1407", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7s47p
I0211 18:04:31.235] core.sh:1061: Successful get rc frontend {{.spec.replicas}}: 3
I0211 18:04:31.265] core.sh:1065: Successful get rc frontend {{.spec.replicas}}: 3
I0211 18:04:31.367] replicationcontroller/frontend scaled
W0211 18:04:31.468] I0211 18:04:31.374123   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549908267-9513", Name:"frontend", UID:"7981f398-2e27-11e9-8b3e-0242ac110002", APIVersion:"v1", ResourceVersion:"1412", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-7s47p
I0211 18:04:31.569] core.sh:1069: Successful get rc frontend {{.spec.replicas}}: 2
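The 'Expected replicas to be 3, was 2' error earlier in this block is the --current-replicas precondition failing; a sketch of both forms (names per this run):

  kubectl scale rc frontend --current-replicas=3 --replicas=3   # precondition fails while the rc has 2 replicas
  kubectl scale rc frontend --replicas=3                        # unconditional scale succeeds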
... skipping 41 lines ...
I0211 18:04:33.968] service "expose-test-deployment" deleted
I0211 18:04:34.092] Successful
I0211 18:04:34.092] message:service/expose-test-deployment exposed
I0211 18:04:34.092] has:service/expose-test-deployment exposed
I0211 18:04:34.184] service "expose-test-deployment" deleted
I0211 18:04:34.291] Successful
I0211 18:04:34.292] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0211 18:04:34.292] See 'kubectl expose -h' for help and examples
I0211 18:04:34.292] has:invalid deployment: no selectors
I0211 18:04:34.394] Successful
I0211 18:04:34.394] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0211 18:04:34.394] See 'kubectl expose -h' for help and examples
I0211 18:04:34.395] has:invalid deployment: no selectors
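Expose fails twice above because the deployment defines no selector to copy into the service; a sketch of the failing call and the explicit-selector workaround (deployment name and label are illustrative):

  kubectl expose deployment no-selector-deploy --port=80                       # errors: no selectors, cannot be exposed
  kubectl expose deployment no-selector-deploy --port=80 --selector=app=nginx  # supplies the selector directly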
I0211 18:04:34.590] deployment.apps/nginx-deployment created
W0211 18:04:34.691] I0211 18:04:34.593262   57378 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549908267-9513", Name:"nginx-deployment", UID:"7ceaa0b8-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1529", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-64bb598779 to 3
W0211 18:04:34.691] I0211 18:04:34.596054   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908267-9513", Name:"nginx-deployment-64bb598779", UID:"7ceb3ffc-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1530", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-64bb598779-dnhg5
W0211 18:04:34.691] I0211 18:04:34.598715   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908267-9513", Name:"nginx-deployment-64bb598779", UID:"7ceb3ffc-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1530", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-64bb598779-599pk
... skipping 23 lines ...
I0211 18:04:36.834] service "frontend" deleted
I0211 18:04:36.841] service "frontend-2" deleted
I0211 18:04:36.848] service "frontend-3" deleted
I0211 18:04:36.856] service "frontend-4" deleted
I0211 18:04:36.863] service "frontend-5" deleted
I0211 18:04:36.977] Successful
I0211 18:04:36.977] message:error: cannot expose a Node
I0211 18:04:36.977] has:cannot expose
I0211 18:04:37.082] Successful
I0211 18:04:37.083] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0211 18:04:37.083] has:metadata.name: Invalid value
I0211 18:04:37.196] Successful
I0211 18:04:37.197] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 30 lines ...
I0211 18:04:39.596] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0211 18:04:39.715] core.sh:1233: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I0211 18:04:39.802] (Bhorizontalpodautoscaler.autoscaling "frontend" deleted
I0211 18:04:39.916] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0211 18:04:40.042] core.sh:1237: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0211 18:04:40.135] (Bhorizontalpodautoscaler.autoscaling "frontend" deleted
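Note: the two hpa assertions above (1 2 70, then 2 3 80) line up with `kubectl autoscale` runs, and the Error block that follows is the same command issued without the mandatory --max flag. A rough reconstruction, assuming a deployment target (the test's actual fixture kind is not shown here):
    kubectl autoscale deployment frontend --min=1 --max=2 --cpu-percent=70
    kubectl autoscale deployment frontend --min=2 --max=3 --cpu-percent=80
    # omitting --max is rejected: Error: required flag(s) "max" not set
    kubectl autoscale deployment frontend --min=2 --cpu-percent=80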
W0211 18:04:40.236] Error: required flag(s) "max" not set
W0211 18:04:40.236] 
W0211 18:04:40.236] 
W0211 18:04:40.236] Examples:
W0211 18:04:40.236]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0211 18:04:40.237]   kubectl autoscale deployment foo --min=2 --max=10
W0211 18:04:40.237]   
... skipping 54 lines ...
I0211 18:04:40.548]           limits:
I0211 18:04:40.548]             cpu: 300m
I0211 18:04:40.548]           requests:
I0211 18:04:40.548]             cpu: 300m
I0211 18:04:40.549]       terminationGracePeriodSeconds: 0
I0211 18:04:40.549] status: {}
W0211 18:04:40.649] Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
I0211 18:04:40.857] deployment.apps/nginx-deployment-resources created
W0211 18:04:40.957] I0211 18:04:40.860722   57378 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549908267-9513", Name:"nginx-deployment-resources", UID:"80a6f2f4-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1669", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-695c766d58 to 3
W0211 18:04:40.958] I0211 18:04:40.864139   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908267-9513", Name:"nginx-deployment-resources-695c766d58", UID:"80a791a8-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1670", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-695c766d58-8mjzk
W0211 18:04:40.959] I0211 18:04:40.867117   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908267-9513", Name:"nginx-deployment-resources-695c766d58", UID:"80a791a8-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1670", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-695c766d58-4lwc4
W0211 18:04:40.959] I0211 18:04:40.867601   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908267-9513", Name:"nginx-deployment-resources-695c766d58", UID:"80a791a8-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1670", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-695c766d58-km9hr
I0211 18:04:41.059] core.sh:1252: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
... skipping 2 lines ...
I0211 18:04:41.324] deployment.extensions/nginx-deployment-resources resource requirements updated
W0211 18:04:41.425] I0211 18:04:41.326344   57378 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549908267-9513", Name:"nginx-deployment-resources", UID:"80a6f2f4-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1684", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-5b7fc6dd8b to 1
W0211 18:04:41.425] I0211 18:04:41.329967   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908267-9513", Name:"nginx-deployment-resources-5b7fc6dd8b", UID:"80eea03c-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1685", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5b7fc6dd8b-xcz7d
I0211 18:04:41.526] core.sh:1257: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
I0211 18:04:41.558] core.sh:1258: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
I0211 18:04:41.765] deployment.extensions/nginx-deployment-resources resource requirements updated
W0211 18:04:41.866] error: unable to find container named redis
W0211 18:04:41.866] I0211 18:04:41.775856   57378 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549908267-9513", Name:"nginx-deployment-resources", UID:"80a6f2f4-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1694", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-695c766d58 to 2
W0211 18:04:41.866] I0211 18:04:41.780813   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908267-9513", Name:"nginx-deployment-resources-695c766d58", UID:"80a791a8-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1698", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-695c766d58-8mjzk
W0211 18:04:41.867] I0211 18:04:41.781379   57378 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549908267-9513", Name:"nginx-deployment-resources", UID:"80a6f2f4-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1697", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6bc4567bf6 to 1
W0211 18:04:41.867] I0211 18:04:41.785042   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908267-9513", Name:"nginx-deployment-resources-6bc4567bf6", UID:"8132357b-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1702", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6bc4567bf6-l9cwx
I0211 18:04:41.967] core.sh:1263: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0211 18:04:41.997] core.sh:1264: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
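Note: `error: unable to find container named redis` is `kubectl set resources` refusing to touch a container that is not in the pod template; the 200m:/100m: assertions above confirm the existing containers were left as-is. Sketch, with illustrative container names and values:
    # target an existing container explicitly with -c
    kubectl set resources deployment nginx-deployment-resources -c=nginx --limits=cpu=200m,memory=512Mi
    # a container name that does not exist fails without mutating anything
    kubectl set resources deployment nginx-deployment-resources -c=redis --limits=cpu=100m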
... skipping 79 lines ...
I0211 18:04:42.571]     status: "True"
I0211 18:04:42.571]     type: Progressing
I0211 18:04:42.571]   observedGeneration: 4
I0211 18:04:42.571]   replicas: 4
I0211 18:04:42.571]   unavailableReplicas: 4
I0211 18:04:42.571]   updatedReplicas: 1
W0211 18:04:42.672] error: you must specify resources by --filename when --local is set.
W0211 18:04:42.672] Example resource specifications include:
W0211 18:04:42.672]    '-f rsrc.yaml'
W0211 18:04:42.672]    '--filename=rsrc.json'
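Note: --local makes `kubectl set` rewrite a manifest client-side instead of patching the live object, so it refuses to run without -f/--filename, as the error above shows. Sketch (file name illustrative):
    # nothing is sent to the apiserver; the modified object is printed locally
    kubectl set resources -f deployment.yaml --limits=cpu=200m --local -o yaml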
I0211 18:04:42.773] core.sh:1273: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0211 18:04:42.868] core.sh:1274: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I0211 18:04:42.985] core.sh:1275: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 44 lines ...
I0211 18:04:44.720]                 pod-template-hash=7875bf5c8b
I0211 18:04:44.720] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I0211 18:04:44.720]                 deployment.kubernetes.io/max-replicas: 2
I0211 18:04:44.721]                 deployment.kubernetes.io/revision: 1
I0211 18:04:44.721] Controlled By:  Deployment/test-nginx-apps
I0211 18:04:44.721] Replicas:       1 current / 1 desired
I0211 18:04:44.721] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0211 18:04:44.721] Pod Template:
I0211 18:04:44.721]   Labels:  app=test-nginx-apps
I0211 18:04:44.721]            pod-template-hash=7875bf5c8b
I0211 18:04:44.721]   Containers:
I0211 18:04:44.721]    nginx:
I0211 18:04:44.722]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 91 lines ...
I0211 18:04:49.633]     Image:	k8s.gcr.io/nginx:test-cmd
I0211 18:04:49.743] apps.sh:296: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0211 18:04:49.852] deployment.extensions/nginx rolled back
I0211 18:04:50.962] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0211 18:04:51.179] apps.sh:303: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0211 18:04:51.300] deployment.extensions/nginx rolled back
W0211 18:04:51.400] error: unable to find specified revision 1000000 in history
I0211 18:04:52.418] apps.sh:307: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0211 18:04:52.525] deployment.extensions/nginx paused
W0211 18:04:52.649] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
I0211 18:04:52.760] deployment.extensions/nginx resumed
I0211 18:04:52.884] deployment.extensions/nginx rolled back
I0211 18:04:53.090]     deployment.kubernetes.io/revision-history: 1,3
W0211 18:04:53.284] error: desired revision (3) is different from the running revision (5)
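Note: the sequence above exercises three `kubectl rollout undo` behaviors: rolling back to an existing revision, failing on a nonexistent one (1000000), and refusing to act on a paused deployment until it is resumed. Sketch:
    kubectl rollout undo deployment/nginx                  # back to the previous revision
    kubectl rollout undo deployment/nginx --to-revision=3  # back to a named revision
    kubectl rollout pause deployment/nginx                 # undo is rejected while paused
    kubectl rollout resume deployment/nginx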
I0211 18:04:53.480] deployment.apps/nginx2 created
W0211 18:04:53.580] I0211 18:04:53.483646   57378 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549908283-30081", Name:"nginx2", UID:"882d16d7-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1918", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx2-78cb9c866 to 3
W0211 18:04:53.581] I0211 18:04:53.486986   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908283-30081", Name:"nginx2-78cb9c866", UID:"882db4f0-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1919", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-78cb9c866-4n98l
W0211 18:04:53.581] I0211 18:04:53.489280   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908283-30081", Name:"nginx2-78cb9c866", UID:"882db4f0-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1919", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-78cb9c866-dhksx
W0211 18:04:53.582] I0211 18:04:53.489361   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908283-30081", Name:"nginx2-78cb9c866", UID:"882db4f0-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1919", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-78cb9c866-75p7m
I0211 18:04:53.682] deployment.extensions "nginx2" deleted
... skipping 11 lines ...
W0211 18:04:54.538] I0211 18:04:54.440809   57378 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549908283-30081", Name:"nginx-deployment", UID:"88797891-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1966", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5bfd55c857 to 1
W0211 18:04:54.538] I0211 18:04:54.444655   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908283-30081", Name:"nginx-deployment-5bfd55c857", UID:"88bfba8a-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1967", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5bfd55c857-fzspn
W0211 18:04:54.596] I0211 18:04:54.595678   57378 horizontal.go:320] Horizontal Pod Autoscaler frontend has been deleted in namespace-1549908267-9513
I0211 18:04:54.697] apps.sh:337: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0211 18:04:54.697] apps.sh:338: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0211 18:04:54.874] (Bdeployment.extensions/nginx-deployment image updated
W0211 18:04:54.975] error: unable to find container named "redis"
I0211 18:04:55.076] apps.sh:343: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0211 18:04:55.103] apps.sh:344: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0211 18:04:55.197] (Bdeployment.apps/nginx-deployment image updated
I0211 18:04:55.326] apps.sh:347: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0211 18:04:55.440] apps.sh:348: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0211 18:04:55.639] apps.sh:351: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
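Note: `kubectl set image` resolves containers by name the same way: the "redis" update above fails and the asserted images are unchanged, while addressing a real container succeeds. Sketch (the container name nginx is assumed from the fixture):
    kubectl set image deployment/nginx-deployment nginx=k8s.gcr.io/nginx:1.7.9
    kubectl set image deployment/nginx-deployment '*'=k8s.gcr.io/nginx:1.7.9   # every container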
... skipping 48 lines ...
I0211 18:04:58.866] deployment.extensions/nginx-deployment env updated
I0211 18:04:58.902] deployment.extensions/nginx-deployment env updated
I0211 18:04:59.002] deployment.extensions "nginx-deployment" deleted
W0211 18:04:59.102] I0211 18:04:58.948416   57378 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549908283-30081", Name:"nginx-deployment", UID:"8a2e804b-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2120", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-54979c5b5c to 0
W0211 18:04:59.103] I0211 18:04:59.097964   57378 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549908283-30081", Name:"nginx-deployment", UID:"8a2e804b-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2125", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5cc58864fb to 1
W0211 18:04:59.122] I0211 18:04:59.121188   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908283-30081", Name:"nginx-deployment-54979c5b5c", UID:"8af54bf2-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2127", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-54979c5b5c-79wqm
W0211 18:04:59.171] E0211 18:04:59.171072   57378 replica_set.go:450] Sync "namespace-1549908283-30081/nginx-deployment-687fbc687d" failed with replicasets.apps "nginx-deployment-687fbc687d" not found
W0211 18:04:59.221] E0211 18:04:59.220712   57378 replica_set.go:450] Sync "namespace-1549908283-30081/nginx-deployment-58dbcd7c7f" failed with replicasets.apps "nginx-deployment-58dbcd7c7f" not found
W0211 18:04:59.273] I0211 18:04:59.272773   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908283-30081", Name:"nginx-deployment-5cc58864fb", UID:"8b864f74-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2146", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5cc58864fb-zpkn6
I0211 18:04:59.374] configmap "test-set-env-config" deleted
I0211 18:04:59.374] secret "test-set-env-secret" deleted
I0211 18:04:59.374] +++ exit code: 0
I0211 18:04:59.374] Recording: run_rs_tests
I0211 18:04:59.375] Running command: run_rs_tests
... skipping 2 lines ...
I0211 18:04:59.375] +++ working dir: /go/src/k8s.io/kubernetes
I0211 18:04:59.375] +++ command: run_rs_tests
I0211 18:04:59.375] +++ [0211 18:04:59] Creating namespace namespace-1549908299-23699
I0211 18:04:59.426] namespace/namespace-1549908299-23699 created
I0211 18:04:59.516] Context "test" modified.
I0211 18:04:59.527] +++ [0211 18:04:59] Testing kubectl(v1:replicasets)
W0211 18:04:59.628] E0211 18:04:59.520858   57378 replica_set.go:450] Sync "namespace-1549908283-30081/nginx-deployment-54979c5b5c" failed with replicasets.apps "nginx-deployment-54979c5b5c" not found
W0211 18:04:59.629] E0211 18:04:59.571118   57378 replica_set.go:450] Sync "namespace-1549908283-30081/nginx-deployment-5cc58864fb" failed with replicasets.apps "nginx-deployment-5cc58864fb" not found
I0211 18:04:59.729] apps.sh:502: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 18:04:59.862] replicaset.apps/frontend created
I0211 18:04:59.883] +++ [0211 18:04:59] Deleting rs
I0211 18:04:59.972] replicaset.extensions "frontend" deleted
W0211 18:05:00.073] I0211 18:04:59.868828   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908299-23699", Name:"frontend", UID:"8bfaf76e-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2158", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-rgl65
W0211 18:05:00.074] I0211 18:04:59.871685   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908299-23699", Name:"frontend", UID:"8bfaf76e-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2158", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-w7zkz
W0211 18:05:00.074] I0211 18:04:59.871767   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908299-23699", Name:"frontend", UID:"8bfaf76e-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2158", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-fzmwt
W0211 18:05:00.075] E0211 18:05:00.020734   57378 replica_set.go:450] Sync "namespace-1549908299-23699/frontend" failed with replicasets.apps "frontend" not found
I0211 18:05:00.176] apps.sh:508: Successful get pods -l "tier=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 18:05:00.203] apps.sh:512: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 18:05:00.399] replicaset.apps/frontend-no-cascade created
W0211 18:05:00.500] I0211 18:05:00.402685   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908299-23699", Name:"frontend-no-cascade", UID:"8c4cd1b6-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2174", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-wfv5s
W0211 18:05:00.500] I0211 18:05:00.405535   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908299-23699", Name:"frontend-no-cascade", UID:"8c4cd1b6-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2174", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-v6gp9
W0211 18:05:00.501] I0211 18:05:00.406315   57378 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549908299-23699", Name:"frontend-no-cascade", UID:"8c4cd1b6-2e27-11e9-8b3e-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2174", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-kb2zv
I0211 18:05:00.601] apps.sh:518: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
I0211 18:05:00.602] +++ [0211 18:05:00] Deleting rs
I0211 18:05:00.640] replicaset.extensions "frontend-no-cascade" deleted
W0211 18:05:00.741] E0211 18:05:00.655853   57378 replica_set.go:450] Sync "namespace-1549908299-23699/frontend-no-cascade" failed with Operation cannot be fulfilled on replicasets.apps "frontend-no-cascade": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1549908299-23699/frontend-no-cascade, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 8c4cd1b6-2e27-11e9-8b3e-0242ac110002, UID in object meta: 
W0211 18:05:00.741] E0211 18:05:00.720720   57378 replica_set.go:450] Sync "namespace-1549908299-23699/frontend-no-cascade" failed with replicasets.apps "frontend-no-cascade" not found
I0211 18:05:00.842] apps.sh:522: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 18:05:00.877] apps.sh:524: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
I0211 18:05:00.972] pod "frontend-no-cascade-kb2zv" deleted
I0211 18:05:00.978] pod "frontend-no-cascade-v6gp9" deleted
I0211 18:05:00.984] pod "frontend-no-cascade-wfv5s" deleted
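Note: frontend-no-cascade was deleted with cascading turned off, which is why its three php-redis pods outlive the replicaset and are cleaned up individually above. Sketch (flag spelling as of this kubectl vintage; newer releases spell it --cascade=orphan):
    kubectl delete rs frontend-no-cascade --cascade=false   # orphan the pods
    kubectl delete pods -l tier=frontend                    # then remove them by label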
I0211 18:05:01.104] apps.sh:527: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 8 lines ...
I0211 18:05:01.709] Namespace:    namespace-1549908299-23699
I0211 18:05:01.709] Selector:     app=guestbook,tier=frontend
I0211 18:05:01.709] Labels:       app=guestbook
I0211 18:05:01.709]               tier=frontend
I0211 18:05:01.710] Annotations:  <none>
I0211 18:05:01.710] Replicas:     3 current / 3 desired
I0211 18:05:01.710] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 18:05:01.710] Pod Template:
I0211 18:05:01.710]   Labels:  app=guestbook
I0211 18:05:01.710]            tier=frontend
I0211 18:05:01.710]   Containers:
I0211 18:05:01.710]    php-redis:
I0211 18:05:01.710]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0211 18:05:01.848] Namespace:    namespace-1549908299-23699
I0211 18:05:01.848] Selector:     app=guestbook,tier=frontend
I0211 18:05:01.848] Labels:       app=guestbook
I0211 18:05:01.848]               tier=frontend
I0211 18:05:01.848] Annotations:  <none>
I0211 18:05:01.848] Replicas:     3 current / 3 desired
I0211 18:05:01.848] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 18:05:01.848] Pod Template:
I0211 18:05:01.848]   Labels:  app=guestbook
I0211 18:05:01.848]            tier=frontend
I0211 18:05:01.849]   Containers:
I0211 18:05:01.849]    php-redis:
I0211 18:05:01.849]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0211 18:05:01.977] Namespace:    namespace-1549908299-23699
I0211 18:05:01.977] Selector:     app=guestbook,tier=frontend
I0211 18:05:01.977] Labels:       app=guestbook
I0211 18:05:01.977]               tier=frontend
I0211 18:05:01.978] Annotations:  <none>
I0211 18:05:01.978] Replicas:     3 current / 3 desired
I0211 18:05:01.978] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 18:05:01.978] Pod Template:
I0211 18:05:01.978]   Labels:  app=guestbook
I0211 18:05:01.978]            tier=frontend
I0211 18:05:01.978]   Containers:
I0211 18:05:01.978]    php-redis:
I0211 18:05:01.978]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
I0211 18:05:02.117] Namespace:    namespace-1549908299-23699
I0211 18:05:02.117] Selector:     app=guestbook,tier=frontend
I0211 18:05:02.117] Labels:       app=guestbook
I0211 18:05:02.118]               tier=frontend
I0211 18:05:02.118] Annotations:  <none>
I0211 18:05:02.118] Replicas:     3 current / 3 desired
I0211 18:05:02.118] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 18:05:02.118] Pod Template:
I0211 18:05:02.118]   Labels:  app=guestbook
I0211 18:05:02.118]            tier=frontend
I0211 18:05:02.118]   Containers:
I0211 18:05:02.118]    php-redis:
I0211 18:05:02.118]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0211 18:05:02.293] Namespace:    namespace-1549908299-23699
I0211 18:05:02.293] Selector:     app=guestbook,tier=frontend
I0211 18:05:02.293] Labels:       app=guestbook
I0211 18:05:02.293]               tier=frontend
I0211 18:05:02.294] Annotations:  <none>
I0211 18:05:02.294] Replicas:     3 current / 3 desired
I0211 18:05:02.294] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 18:05:02.294] Pod Template:
I0211 18:05:02.294]   Labels:  app=guestbook
I0211 18:05:02.294]            tier=frontend
I0211 18:05:02.294]   Containers:
I0211 18:05:02.294]    php-redis:
I0211 18:05:02.294]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0211 18:05:02.428] Namespace:    namespace-1549908299-23699
I0211 18:05:02.428] Selector:     app=guestbook,tier=frontend
I0211 18:05:02.428] Labels:       app=guestbook
I0211 18:05:02.429]               tier=frontend
I0211 18:05:02.429] Annotations:  <none>
I0211 18:05:02.429] Replicas:     3 current / 3 desired
I0211 18:05:02.429] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 18:05:02.429] Pod Template:
I0211 18:05:02.429]   Labels:  app=guestbook
I0211 18:05:02.429]            tier=frontend
I0211 18:05:02.429]   Containers:
I0211 18:05:02.429]    php-redis:
I0211 18:05:02.429]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0211 18:05:02.557] Namespace:    namespace-1549908299-23699
I0211 18:05:02.557] Selector:     app=guestbook,tier=frontend
I0211 18:05:02.557] Labels:       app=guestbook
I0211 18:05:02.557]               tier=frontend
I0211 18:05:02.557] Annotations:  <none>
I0211 18:05:02.557] Replicas:     3 current / 3 desired
I0211 18:05:02.557] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 18:05:02.558] Pod Template:
I0211 18:05:02.558]   Labels:  app=guestbook
I0211 18:05:02.558]            tier=frontend
I0211 18:05:02.558]   Containers:
I0211 18:05:02.558]    php-redis:
I0211 18:05:02.558]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I0211 18:05:02.693] Namespace:    namespace-1549908299-23699
I0211 18:05:02.693] Selector:     app=guestbook,tier=frontend
I0211 18:05:02.693] Labels:       app=guestbook
I0211 18:05:02.693]               tier=frontend
I0211 18:05:02.694] Annotations:  <none>
I0211 18:05:02.694] Replicas:     3 current / 3 desired
I0211 18:05:02.694] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 18:05:02.694] Pod Template:
I0211 18:05:02.694]   Labels:  app=guestbook
I0211 18:05:02.694]            tier=frontend
I0211 18:05:02.694]   Containers:
I0211 18:05:02.694]    php-redis:
I0211 18:05:02.694]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 184 lines ...
I0211 18:05:09.417] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0211 18:05:09.529] apps.sh:643: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I0211 18:05:09.613] horizontalpodautoscaler.autoscaling "frontend" deleted
I0211 18:05:09.723] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0211 18:05:09.842] apps.sh:647: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0211 18:05:09.928] horizontalpodautoscaler.autoscaling "frontend" deleted
W0211 18:05:10.029] Error: required flag(s) "max" not set
W0211 18:05:10.029] 
W0211 18:05:10.030] 
W0211 18:05:10.030] Examples:
W0211 18:05:10.030]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0211 18:05:10.030]   kubectl autoscale deployment foo --min=2 --max=10
W0211 18:05:10.030]   
... skipping 88 lines ...
I0211 18:05:13.795] apps.sh:431: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0211 18:05:13.908] apps.sh:432: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0211 18:05:14.037] statefulset.apps/nginx rolled back
I0211 18:05:14.158] apps.sh:435: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0211 18:05:14.280] apps.sh:436: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0211 18:05:14.412] Successful
I0211 18:05:14.412] message:error: unable to find specified revision 1000000 in history
I0211 18:05:14.412] has:unable to find specified revision
I0211 18:05:14.528] apps.sh:440: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0211 18:05:14.645] apps.sh:441: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0211 18:05:14.763] statefulset.apps/nginx rolled back
I0211 18:05:14.876] apps.sh:444: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
I0211 18:05:14.982] apps.sh:445: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
... skipping 58 lines ...
I0211 18:05:17.204] Name:         mock
I0211 18:05:17.204] Namespace:    namespace-1549908316-13325
I0211 18:05:17.204] Selector:     app=mock
I0211 18:05:17.204] Labels:       app=mock
I0211 18:05:17.204] Annotations:  <none>
I0211 18:05:17.204] Replicas:     1 current / 1 desired
I0211 18:05:17.205] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0211 18:05:17.205] Pod Template:
I0211 18:05:17.205]   Labels:  app=mock
I0211 18:05:17.205]   Containers:
I0211 18:05:17.205]    mock-container:
I0211 18:05:17.205]     Image:        k8s.gcr.io/pause:2.0
I0211 18:05:17.205]     Port:         9949/TCP
... skipping 56 lines ...
I0211 18:05:19.854] Name:         mock
I0211 18:05:19.854] Namespace:    namespace-1549908316-13325
I0211 18:05:19.854] Selector:     app=mock
I0211 18:05:19.854] Labels:       app=mock
I0211 18:05:19.854] Annotations:  <none>
I0211 18:05:19.854] Replicas:     1 current / 1 desired
I0211 18:05:19.854] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0211 18:05:19.854] Pod Template:
I0211 18:05:19.854]   Labels:  app=mock
I0211 18:05:19.854]   Containers:
I0211 18:05:19.854]    mock-container:
I0211 18:05:19.855]     Image:        k8s.gcr.io/pause:2.0
I0211 18:05:19.855]     Port:         9949/TCP
... skipping 56 lines ...
I0211 18:05:22.561] Name:         mock
I0211 18:05:22.561] Namespace:    namespace-1549908316-13325
I0211 18:05:22.561] Selector:     app=mock
I0211 18:05:22.561] Labels:       app=mock
I0211 18:05:22.561] Annotations:  <none>
I0211 18:05:22.562] Replicas:     1 current / 1 desired
I0211 18:05:22.562] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0211 18:05:22.562] Pod Template:
I0211 18:05:22.562]   Labels:  app=mock
I0211 18:05:22.562]   Containers:
I0211 18:05:22.562]    mock-container:
I0211 18:05:22.562]     Image:        k8s.gcr.io/pause:2.0
I0211 18:05:22.562]     Port:         9949/TCP
... skipping 43 lines ...
I0211 18:05:25.158] Namespace:    namespace-1549908316-13325
I0211 18:05:25.158] Selector:     app=mock
I0211 18:05:25.159] Labels:       app=mock
I0211 18:05:25.159]               status=replaced
I0211 18:05:25.159] Annotations:  <none>
I0211 18:05:25.159] Replicas:     1 current / 1 desired
I0211 18:05:25.159] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0211 18:05:25.159] Pod Template:
I0211 18:05:25.159]   Labels:  app=mock
I0211 18:05:25.159]   Containers:
I0211 18:05:25.159]    mock-container:
I0211 18:05:25.159]     Image:        k8s.gcr.io/pause:2.0
I0211 18:05:25.159]     Port:         9949/TCP
... skipping 11 lines ...
I0211 18:05:25.166] Namespace:    namespace-1549908316-13325
I0211 18:05:25.166] Selector:     app=mock2
I0211 18:05:25.166] Labels:       app=mock2
I0211 18:05:25.166]               status=replaced
I0211 18:05:25.166] Annotations:  <none>
I0211 18:05:25.166] Replicas:     1 current / 1 desired
I0211 18:05:25.166] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0211 18:05:25.166] Pod Template:
I0211 18:05:25.167]   Labels:  app=mock2
I0211 18:05:25.167]   Containers:
I0211 18:05:25.167]    mock-container:
I0211 18:05:25.167]     Image:        k8s.gcr.io/pause:2.0
I0211 18:05:25.167]     Port:         9949/TCP
... skipping 104 lines ...
I0211 18:05:31.269] +++ [0211 18:05:31] Creating namespace namespace-1549908331-2236
I0211 18:05:31.353] namespace/namespace-1549908331-2236 created
I0211 18:05:31.441] Context "test" modified.
I0211 18:05:31.453] +++ [0211 18:05:31] Testing persistent volumes
I0211 18:05:31.570] storage.sh:30: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 18:05:31.758] persistentvolume/pv0001 created
W0211 18:05:31.859] E0211 18:05:31.764673   57378 pv_protection_controller.go:116] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
I0211 18:05:31.960] storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
I0211 18:05:31.974] persistentvolume "pv0001" deleted
I0211 18:05:32.172] persistentvolume/pv0002 created
W0211 18:05:32.274] E0211 18:05:32.175642   57378 pv_protection_controller.go:116] PV pv0002 failed with : Operation cannot be fulfilled on persistentvolumes "pv0002": the object has been modified; please apply your changes to the latest version and try again
I0211 18:05:32.375] storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
I0211 18:05:32.387] persistentvolume "pv0002" deleted
I0211 18:05:32.585] persistentvolume/pv0003 created
I0211 18:05:32.713] storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
I0211 18:05:32.802] persistentvolume "pv0003" deleted
I0211 18:05:32.926] storage.sh:42: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 469 lines ...
I0211 18:05:38.617] yes
I0211 18:05:38.617] has:the server doesn't have a resource type
I0211 18:05:38.714] Successful
I0211 18:05:38.714] message:yes
I0211 18:05:38.714] has:yes
I0211 18:05:38.796] Successful
I0211 18:05:38.797] message:error: --subresource can not be used with NonResourceURL
I0211 18:05:38.797] has:subresource can not be used with NonResourceURL
I0211 18:05:38.898] Successful
I0211 18:05:38.999] Successful
I0211 18:05:39.000] message:yes
I0211 18:05:39.000] 0
I0211 18:05:39.000] has:0
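Note: `kubectl auth can-i` takes either a resource (optionally narrowed by --subresource) or a non-resource URL, but not a subresource on a URL, which is the rejection above; the trailing 0 appears to be the command's exit status echoed by the test. Sketch:
    kubectl auth can-i get pods --subresource=log   # resource + subresource: accepted
    kubectl auth can-i get /logs                    # non-resource URL: accepted
    kubectl auth can-i get /logs --subresource=log  # rejected, as above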
... skipping 6 lines ...
I0211 18:05:39.235] role.rbac.authorization.k8s.io/testing-R reconciled
I0211 18:05:39.357] legacy-script.sh:745: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
I0211 18:05:39.469] legacy-script.sh:746: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
I0211 18:05:39.585] legacy-script.sh:747: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
I0211 18:05:39.698] legacy-script.sh:748: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
I0211 18:05:39.794] Successful
I0211 18:05:39.794] message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
I0211 18:05:39.795] has:only rbac.authorization.k8s.io/v1 is supported
I0211 18:05:39.901] rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
I0211 18:05:39.907] role.rbac.authorization.k8s.io "testing-R" deleted
I0211 18:05:39.919] clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
I0211 18:05:39.926] clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
I0211 18:05:39.942] Recording: run_retrieve_multiple_tests
... skipping 1017 lines ...
I0211 18:06:11.303] message:node/127.0.0.1 already uncordoned (dry run)
I0211 18:06:11.303] has:already uncordoned
I0211 18:06:11.413] node-management.sh:119: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
I0211 18:06:11.497] node/127.0.0.1 labeled
I0211 18:06:11.613] node-management.sh:124: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
I0211 18:06:11.704] Successful
I0211 18:06:11.704] message:error: cannot specify both a node name and a --selector option
I0211 18:06:11.705] See 'kubectl drain -h' for help and examples
I0211 18:06:11.705] has:cannot specify both a node name
I0211 18:06:11.788] Successful
I0211 18:06:11.789] message:error: USAGE: cordon NODE [flags]
I0211 18:06:11.789] See 'kubectl cordon -h' for help and examples
I0211 18:06:11.789] has:error\: USAGE\: cordon NODE
I0211 18:06:11.877] node/127.0.0.1 already uncordoned
I0211 18:06:11.965] Successful
I0211 18:06:11.965] message:error: You must provide one or more resources by argument or filename.
I0211 18:06:11.965] Example resource specifications include:
I0211 18:06:11.965]    '-f rsrc.yaml'
I0211 18:06:11.965]    '--filename=rsrc.json'
I0211 18:06:11.965]    '<resource> <name>'
I0211 18:06:11.965]    '<resource>'
I0211 18:06:11.966] has:must provide one or more resources
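Note: the three errors above are kubectl's node-maintenance commands enforcing their argument shapes: drain/cordon/uncordon take one node name or a --selector but never both, and cordon requires exactly one NODE. Sketch (node name and label taken from this test run):
    kubectl cordon 127.0.0.1                # one node by name
    kubectl uncordon --selector test=label  # or a set of nodes by label, not both
    kubectl drain 127.0.0.1 --force         # cordon plus evict/delete workloads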
... skipping 15 lines ...
I0211 18:06:12.519] Successful
I0211 18:06:12.519] message:The following compatible plugins are available:
I0211 18:06:12.519] 
I0211 18:06:12.519] test/fixtures/pkg/kubectl/plugins/version/kubectl-version
I0211 18:06:12.519]   - warning: kubectl-version overwrites existing command: "kubectl version"
I0211 18:06:12.519] 
I0211 18:06:12.519] error: one plugin warning was found
I0211 18:06:12.520] has:kubectl-version overwrites existing command: "kubectl version"
I0211 18:06:12.610] Successful
I0211 18:06:12.610] message:The following compatible plugins are available:
I0211 18:06:12.610] 
I0211 18:06:12.610] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0211 18:06:12.610] test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
I0211 18:06:12.610]   - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0211 18:06:12.611] 
I0211 18:06:12.611] error: one plugin warning was found
I0211 18:06:12.611] has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
I0211 18:06:12.699] Successful
I0211 18:06:12.699] message:The following compatible plugins are available:
I0211 18:06:12.699] 
I0211 18:06:12.699] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0211 18:06:12.699] has:plugins are available
I0211 18:06:12.790] Successful
I0211 18:06:12.790] message:
I0211 18:06:12.790] error: unable to find any kubectl plugins in your PATH
I0211 18:06:12.790] has:unable to find any kubectl plugins in your PATH
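Note: kubectl discovers plugins as executables named kubectl-* on PATH; `kubectl plugin list` warns when a plugin shadows a builtin (kubectl-version above) or is overshadowed by a same-named plugin earlier on PATH, and errors when none are found. A minimal plugin sketch (install path illustrative):
    cat > ~/bin/kubectl-foo <<'EOF'
    #!/bin/sh
    echo "I am plugin foo"
    EOF
    chmod +x ~/bin/kubectl-foo
    kubectl plugin list   # discovery report, including any warnings
    kubectl foo           # dispatches to the plugin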
I0211 18:06:12.878] Successful
I0211 18:06:12.878] message:I am plugin foo
I0211 18:06:12.879] has:plugin foo
I0211 18:06:12.961] Successful
I0211 18:06:12.962] message:Client Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.2.528+3f9b5b36eadaec", GitCommit:"3f9b5b36eadaec09e35db4085293ef88c9606ee9", GitTreeState:"clean", BuildDate:"2019-02-11T17:58:34Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
... skipping 9 lines ...
I0211 18:06:13.113] 
I0211 18:06:13.116] +++ Running case: test-cmd.run_impersonation_tests 
I0211 18:06:13.119] +++ working dir: /go/src/k8s.io/kubernetes
I0211 18:06:13.122] +++ command: run_impersonation_tests
I0211 18:06:13.135] +++ [0211 18:06:13] Testing impersonation
I0211 18:06:13.215] Successful
I0211 18:06:13.215] message:error: requesting groups or user-extra for  without impersonating a user
I0211 18:06:13.215] has:without impersonating a user
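Note: group and user-extra impersonation ride on user impersonation, so --as-group without --as is rejected as above. Sketch:
    kubectl get pods --as=user1 --as-group=system:masters   # accepted: user plus group
    kubectl get pods --as-group=system:masters              # rejected: no user to impersonate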
I0211 18:06:13.422] certificatesigningrequest.certificates.k8s.io/foo created
I0211 18:06:13.555] authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
I0211 18:06:13.666] authorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
I0211 18:06:13.760] certificatesigningrequest.certificates.k8s.io "foo" deleted
I0211 18:06:13.960] certificatesigningrequest.certificates.k8s.io/foo created
... skipping 42 lines ...
W0211 18:06:17.335] I0211 18:06:17.335183   54033 picker_wrapper.go:218] blockingPicker: the picked transport is not ready, loop back to repick
W0211 18:06:17.335] I0211 18:06:17.334679   54033 picker_wrapper.go:218] blockingPicker: the picked transport is not ready, loop back to repick
W0211 18:06:17.336] I0211 18:06:17.333059   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.336] I0211 18:06:17.335228   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.336] I0211 18:06:17.329347   54033 autoregister_controller.go:160] Shutting down autoregister controller
W0211 18:06:17.336] I0211 18:06:17.335238   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.337] W0211 18:06:17.332060   54033 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 18:06:17.337] I0211 18:06:17.335254   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.337] I0211 18:06:17.329363   54033 apiservice_controller.go:102] Shutting down APIServiceRegistrationController
W0211 18:06:17.337] I0211 18:06:17.329373   54033 crd_finalizer.go:254] Shutting down CRDFinalizer
W0211 18:06:17.337] I0211 18:06:17.335310   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.338] I0211 18:06:17.329169   54033 available_controller.go:328] Shutting down AvailableConditionController
W0211 18:06:17.338] I0211 18:06:17.335332   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.338] I0211 18:06:17.329767   54033 controller.go:170] Shutting down kubernetes service endpoint reconciler
W0211 18:06:17.338] I0211 18:06:17.335346   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.338] I0211 18:06:17.335371   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.339] I0211 18:06:17.329797   54033 secure_serving.go:160] Stopped listening on 127.0.0.1:8080
W0211 18:06:17.339] I0211 18:06:17.329921   54033 secure_serving.go:160] Stopped listening on 127.0.0.1:6443
W0211 18:06:17.339] I0211 18:06:17.330223   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.339] I0211 18:06:17.330284   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.339] W0211 18:06:17.330294   54033 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 18:06:17.340] I0211 18:06:17.330486   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.340] W0211 18:06:17.330586   54033 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 18:06:17.340] I0211 18:06:17.330646   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.340] I0211 18:06:17.330716   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.341] I0211 18:06:17.330728   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.341] I0211 18:06:17.330788   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.341] I0211 18:06:17.330822   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.341] I0211 18:06:17.330848   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.341] I0211 18:06:17.330863   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.342] I0211 18:06:17.330895   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.342] I0211 18:06:17.330917   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.342] I0211 18:06:17.330941   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.342] W0211 18:06:17.330956   54033 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 18:06:17.343] W0211 18:06:17.330956   54033 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 18:06:17.343] I0211 18:06:17.331006   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.343] W0211 18:06:17.331037   54033 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 18:06:17.343] W0211 18:06:17.331036   54033 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 18:06:17.344] W0211 18:06:17.331061   54033 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 18:06:17.344] W0211 18:06:17.331071   54033 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 18:06:17.344] W0211 18:06:17.331103   54033 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 18:06:17.344] I0211 18:06:17.331133   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.345] W0211 18:06:17.331166   54033 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 18:06:17.345] W0211 18:06:17.331251   54033 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 18:06:17.345] W0211 18:06:17.331401   54033 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 18:06:17.345] I0211 18:06:17.331468   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.346] I0211 18:06:17.331478   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.346] I0211 18:06:17.331593   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.346] I0211 18:06:17.331606   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.346] I0211 18:06:17.331615   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.346] I0211 18:06:17.331629   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.347] W0211 18:06:17.331632   54033 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 18:06:17.347] I0211 18:06:17.331641   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.347] I0211 18:06:17.331642   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.347] I0211 18:06:17.331663   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.347] I0211 18:06:17.331670   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.347] I0211 18:06:17.331690   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.348] I0211 18:06:17.331694   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 8 lines ...
W0211 18:06:17.349] I0211 18:06:17.331906   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.350] I0211 18:06:17.331925   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.350] I0211 18:06:17.331980   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.350] I0211 18:06:17.331983   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.350] I0211 18:06:17.332015   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.350] I0211 18:06:17.332090   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.351] W0211 18:06:17.332095   54033 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 18:06:17.351] W0211 18:06:17.332125   54033 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 18:06:17.351] W0211 18:06:17.332142   54033 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 18:06:17.351] I0211 18:06:17.332157   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.351] W0211 18:06:17.332175   54033 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 18:06:17.352] W0211 18:06:17.332177   54033 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 18:06:17.352] I0211 18:06:17.332198   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.352] W0211 18:06:17.332221   54033 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 18:06:17.353] W0211 18:06:17.332228   54033 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 18:06:17.353] W0211 18:06:17.332229   54033 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 18:06:17.353] I0211 18:06:17.332242   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 69 lines ...
W0211 18:06:17.368] I0211 18:06:17.334468   54033 picker_wrapper.go:218] blockingPicker: the picked transport is not ready, loop back to repick
... skipping 7 lines ...
W0211 18:06:17.370] I0211 18:06:17.329356   54033 naming_controller.go:295] Shutting down NamingConditionController
... skipping 69 lines ...
W0211 18:06:17.380] I0211 18:06:17.347814   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: []
W0211 18:06:17.380] I0211 18:06:17.347928   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.380] W0211 18:06:17.348060   54033 clientconn.go:1440] grpc: addrConn.transportMonitor exits due to: context canceled
W0211 18:06:17.380] I0211 18:06:17.348194   54033 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 18:06:17.380] W0211 18:06:17.348263   54033 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
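The repeated clientconn.go warnings above are grpc-go's background reconnect loop: test-cmd has already torn down etcd on 127.0.0.1:2379, so the apiserver's etcd client cycles CONNECTING -> TRANSIENT_FAILURE until its ClientConn is closed during shutdown. A minimal Go sketch of that behavior (hypothetical, not code from this build; assumes nothing is listening on the port):

package main

import (
    "context"
    "log"
    "time"

    "google.golang.org/grpc"
)

func main() {
    // Dial is non-blocking; the connection machinery retries in the
    // background, emitting the "Reconnecting..." warnings seen above.
    conn, err := grpc.Dial("127.0.0.1:2379", grpc.WithInsecure())
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close() // closing the ClientConn is what stops the retry loop

    ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
    defer cancel()
    for {
        s := conn.GetState() // observe the CONNECTING/TRANSIENT_FAILURE cycle
        log.Printf("state: %v", s)
        if !conn.WaitForStateChange(ctx, s) {
            return // deadline reached; the apiserver instead loops until shutdown
        }
    }
}

Until Close is called the warnings are harmless teardown noise, which is why they flood the tail of the log after the test was killed rather than pointing at the failure itself.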
W0211 18:06:17.471] make: *** [test-cmd] Error 1
I0211 18:06:17.571] No resources found
I0211 18:06:17.571] No resources found
I0211 18:06:17.571] FAILED TESTS: run_crd_tests, 
I0211 18:06:17.572] junit report dir: /workspace/artifacts
I0211 18:06:17.572] +++ [0211 18:06:17] Clean up complete
I0211 18:06:17.572] Makefile:294: recipe for target 'test-cmd' failed
W0211 18:06:20.667] Traceback (most recent call last):
W0211 18:06:20.667]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 178, in <module>
W0211 18:06:20.667]     ARGS.exclude_typecheck, ARGS.exclude_godep)
W0211 18:06:20.668]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 140, in main
W0211 18:06:20.668]     check(*cmd)
W0211 18:06:20.668]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W0211 18:06:20.668]     subprocess.check_call(cmd)
W0211 18:06:20.668]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0211 18:06:20.668]     raise CalledProcessError(retcode, cmd)
W0211 18:06:20.669] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=n', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'EXCLUDE_TYPECHECK=n', '-e', 'EXCLUDE_GODEP=n', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.13-v20190125-cc5d6ecff3', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
E0211 18:06:20.678] Command failed
I0211 18:06:20.678] process 673 exited with code 1 after 14.1m
E0211 18:06:20.679] FAIL: pull-kubernetes-integration
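The traceback is the standard failure path rather than a separate bug: docker run exits with status 2 because make test-cmd failed inside the container, subprocess.check_call converts the non-zero status into CalledProcessError, and the scenario then exits 1, which the job records as the failure. The same propagation pattern in Go (a hypothetical sketch mirroring the wrapper, with "false" standing in for the long docker command above):

package main

import (
    "log"
    "os"
    "os/exec"
)

func main() {
    cmd := exec.Command("false") // stand-in for the docker invocation
    cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    if err := cmd.Run(); err != nil {
        if ee, ok := err.(*exec.ExitError); ok {
            // Non-zero exit from the child; report it the way check_call
            // reports CalledProcessError. log.Fatalf itself exits with
            // status 1, matching the wrapper's own exit code above.
            log.Fatalf("command failed with exit status %d", ee.ExitCode())
        }
        log.Fatalf("command could not be started: %v", err)
    }
}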
I0211 18:06:20.680] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0211 18:06:21.394] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0211 18:06:21.445] process 96677 exited with code 0 after 0.0m
I0211 18:06:21.446] Call:  gcloud config get-value account
I0211 18:06:21.771] process 96689 exited with code 0 after 0.0m
I0211 18:06:21.772] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0211 18:06:21.772] Upload result and artifacts...
I0211 18:06:21.772] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/73805/pull-kubernetes-integration/44341
I0211 18:06:21.772] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/73805/pull-kubernetes-integration/44341/artifacts
W0211 18:06:23.097] CommandException: One or more URLs matched no objects.
E0211 18:06:23.272] Command failed
I0211 18:06:23.273] process 96701 exited with code 1 after 0.0m
W0211 18:06:23.273] Remote dir gs://kubernetes-jenkins/pr-logs/pull/73805/pull-kubernetes-integration/44341/artifacts not exist yet
I0211 18:06:23.273] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/73805/pull-kubernetes-integration/44341/artifacts
I0211 18:06:25.513] process 96843 exited with code 0 after 0.0m
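The CommandException two calls earlier is expected on a first upload: the runner probes the artifacts path with gsutil ls, treats a non-zero exit as "remote dir not created yet", and then lets gsutil cp -r create the prefix implicitly, since GCS has no real directories. A sketch of that probe-then-upload pattern (hypothetical bucket path; assumes gsutil is on PATH):

package main

import (
    "log"
    "os/exec"
)

func main() {
    dst := "gs://example-bucket/pr-logs/artifacts" // hypothetical destination
    // gsutil ls exits non-zero when no object matches the URL; here that
    // only means nothing has been uploaded yet, not a fatal error.
    if err := exec.Command("gsutil", "ls", dst).Run(); err != nil {
        log.Printf("remote dir %s does not exist yet", dst)
    }
    // -m parallelizes and -q quiets progress output; cp -r creates the
    // destination prefix as a side effect of writing objects under it.
    out, err := exec.Command("gsutil", "-m", "-q", "cp", "-r",
        "/workspace/_artifacts", dst).CombinedOutput()
    if err != nil {
        log.Fatalf("upload failed: %v\n%s", err, out)
    }
}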
W0211 18:06:25.513] metadata path /workspace/_artifacts/metadata.json does not exist
W0211 18:06:25.513] metadata not found or invalid, init with empty metadata
... skipping 23 lines ...