Result: FAILURE
Tests: 0 failed / 89 succeeded
Started: 2019-05-14 09:31
Elapsed: 12m59s
Revision:
Builder: gke-prow-containerd-pool-99179761-1b0c
links: {u'resultstore': {u'url': u'https://source.cloud.google.com/results/invocations/e3d429b8-8ee1-483d-a9c7-c6dc38cbf9a2/targets/test'}}
pod: e1069188-762a-11e9-b740-0a580a6c086f
resultstore: https://source.cloud.google.com/results/invocations/e3d429b8-8ee1-483d-a9c7-c6dc38cbf9a2/targets/test
infra-commit: 1a3739a09
repo: k8s.io/kubernetes
repo-commit: a1eaacd59bec0b6cb02544c9122c11efe9569c9b
repos: {u'k8s.io/kubernetes': u'master'}

No Test Failures!



Error lines from build-log.txt

... skipping 304 lines ...
W0514 09:40:36.664] I0514 09:40:36.663635   47231 serving.go:312] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
W0514 09:40:36.664] I0514 09:40:36.663725   47231 server.go:558] external host was not specified, using 172.17.0.2
W0514 09:40:36.665] W0514 09:40:36.663746   47231 authentication.go:415] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
W0514 09:40:36.665] I0514 09:40:36.664323   47231 server.go:145] Version: v1.15.0-alpha.3.325+a1eaacd59bec0b
W0514 09:40:37.560] I0514 09:40:37.560038   47231 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0514 09:40:37.561] I0514 09:40:37.560077   47231 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0514 09:40:37.561] E0514 09:40:37.560567   47231 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0514 09:40:37.561] E0514 09:40:37.560606   47231 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0514 09:40:37.561] E0514 09:40:37.560649   47231 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0514 09:40:37.562] E0514 09:40:37.560673   47231 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
W0514 09:40:37.562] E0514 09:40:37.560702   47231 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0514 09:40:37.562] E0514 09:40:37.560728   47231 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0514 09:40:37.562] E0514 09:40:37.560754   47231 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0514 09:40:37.562] E0514 09:40:37.560772   47231 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0514 09:40:37.562] E0514 09:40:37.560833   47231 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0514 09:40:37.563] E0514 09:40:37.560988   47231 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0514 09:40:37.563] E0514 09:40:37.561032   47231 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0514 09:40:37.563] E0514 09:40:37.561075   47231 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0514 09:40:37.563] I0514 09:40:37.561106   47231 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0514 09:40:37.563] I0514 09:40:37.561116   47231 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0514 09:40:37.563] I0514 09:40:37.562714   47231 client.go:354] parsed scheme: ""
W0514 09:40:37.564] I0514 09:40:37.562738   47231 client.go:354] scheme "" not registered, fallback to default scheme
W0514 09:40:37.564] I0514 09:40:37.562800   47231 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0514 09:40:37.564] I0514 09:40:37.563036   47231 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
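The `duplicate metrics collector registration attempted` errors above are emitted when the apiserver wires the admission_quota_controller workqueue metrics into the same Prometheus registry twice; the Prometheus Go client rejects a second registration of an equivalent collector. A minimal sketch of that behavior, using an illustrative metric name rather than the exact collectors kube-apiserver registers:

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	// Two separately constructed gauges describing the same metric.
	newGauge := func() prometheus.Gauge {
		return prometheus.NewGauge(prometheus.GaugeOpts{
			Name: "admission_quota_controller_depth", // illustrative name
			Help: "Depth of the admission quota controller workqueue.",
		})
	}
	first, second := newGauge(), newGauge()

	// The first registration succeeds.
	fmt.Println(prometheus.Register(first)) // <nil>

	// The second fails with the error seen in the log:
	// "duplicate metrics collector registration attempted".
	if err := prometheus.Register(second); err != nil {
		if are, ok := err.(prometheus.AlreadyRegisteredError); ok {
			_ = are.ExistingCollector // the usual remedy: reuse the existing collector
		}
		fmt.Println(err)
	}
}
```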
... skipping 361 lines ...
W0514 09:40:38.170] W0514 09:40:38.169875   47231 genericapiserver.go:347] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0514 09:40:38.559] I0514 09:40:38.558881   47231 client.go:354] parsed scheme: ""
W0514 09:40:38.559] I0514 09:40:38.558929   47231 client.go:354] scheme "" not registered, fallback to default scheme
W0514 09:40:38.560] I0514 09:40:38.558989   47231 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0514 09:40:38.560] I0514 09:40:38.559038   47231 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0514 09:40:38.560] I0514 09:40:38.559491   47231 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0514 09:40:39.124] E0514 09:40:39.123257   47231 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0514 09:40:39.125] E0514 09:40:39.123352   47231 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0514 09:40:39.125] E0514 09:40:39.123381   47231 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0514 09:40:39.125] E0514 09:40:39.123419   47231 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
W0514 09:40:39.125] E0514 09:40:39.123461   47231 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0514 09:40:39.125] E0514 09:40:39.123484   47231 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0514 09:40:39.126] E0514 09:40:39.123563   47231 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0514 09:40:39.126] E0514 09:40:39.123586   47231 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0514 09:40:39.126] E0514 09:40:39.123658   47231 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0514 09:40:39.126] E0514 09:40:39.123758   47231 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0514 09:40:39.127] E0514 09:40:39.123792   47231 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0514 09:40:39.127] E0514 09:40:39.123842   47231 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0514 09:40:39.127] I0514 09:40:39.123879   47231 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0514 09:40:39.127] I0514 09:40:39.123910   47231 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0514 09:40:39.127] I0514 09:40:39.125358   47231 client.go:354] parsed scheme: ""
W0514 09:40:39.127] I0514 09:40:39.125383   47231 client.go:354] scheme "" not registered, fallback to default scheme
W0514 09:40:39.127] I0514 09:40:39.125440   47231 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0514 09:40:39.128] I0514 09:40:39.125497   47231 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 73 lines ...
W0514 09:41:21.401] I0514 09:41:21.398624   50569 controller_utils.go:1029] Waiting for caches to sync for ReplicationController controller
W0514 09:41:21.407] I0514 09:41:21.407451   50569 controllermanager.go:523] Started "namespace"
W0514 09:41:21.408] I0514 09:41:21.407491   50569 namespace_controller.go:186] Starting namespace controller
W0514 09:41:21.408] I0514 09:41:21.407523   50569 controller_utils.go:1029] Waiting for caches to sync for namespace controller
W0514 09:41:21.408] I0514 09:41:21.408522   50569 controllermanager.go:523] Started "csrcleaner"
W0514 09:41:21.409] I0514 09:41:21.408992   50569 node_lifecycle_controller.go:77] Sending events to api server
W0514 09:41:21.409] E0514 09:41:21.409100   50569 core.go:160] failed to start cloud node lifecycle controller: no cloud provider provided
W0514 09:41:21.409] W0514 09:41:21.409109   50569 controllermanager.go:515] Skipping "cloud-node-lifecycle"
W0514 09:41:21.410] I0514 09:41:21.409951   50569 cleaner.go:81] Starting CSR cleaner controller
W0514 09:41:21.412] I0514 09:41:21.411466   50569 controllermanager.go:523] Started "persistentvolume-binder"
W0514 09:41:21.412] I0514 09:41:21.411492   50569 pv_controller_base.go:271] Starting persistent volume controller
W0514 09:41:21.412] I0514 09:41:21.411562   50569 controller_utils.go:1029] Waiting for caches to sync for persistent volume controller
W0514 09:41:21.413] I0514 09:41:21.413399   50569 controllermanager.go:523] Started "statefulset"
W0514 09:41:21.414] I0514 09:41:21.413639   50569 stateful_set.go:145] Starting stateful set controller
W0514 09:41:21.414] I0514 09:41:21.413743   50569 controller_utils.go:1029] Waiting for caches to sync for stateful set controller
W0514 09:41:21.416] E0514 09:41:21.415720   50569 core.go:76] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0514 09:41:21.416] W0514 09:41:21.415741   50569 controllermanager.go:515] Skipping "service"
W0514 09:41:21.416] W0514 09:41:21.416241   50569 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
W0514 09:41:21.417] I0514 09:41:21.416951   50569 controllermanager.go:523] Started "attachdetach"
W0514 09:41:21.417] I0514 09:41:21.416984   50569 attach_detach_controller.go:335] Starting attach detach controller
W0514 09:41:21.417] I0514 09:41:21.417105   50569 controller_utils.go:1029] Waiting for caches to sync for attach detach controller
W0514 09:41:21.417] I0514 09:41:21.417420   50569 controllermanager.go:523] Started "clusterrole-aggregation"
... skipping 79 lines ...
W0514 09:41:22.047] I0514 09:41:22.046899   50569 expand_controller.go:153] Starting expand controller
W0514 09:41:22.047] I0514 09:41:22.046925   50569 controller_utils.go:1029] Waiting for caches to sync for expand controller
W0514 09:41:22.047] I0514 09:41:22.047395   50569 controllermanager.go:523] Started "pv-protection"
W0514 09:41:22.048] I0514 09:41:22.047456   50569 pv_protection_controller.go:82] Starting PV protection controller
W0514 09:41:22.048] I0514 09:41:22.047481   50569 controller_utils.go:1029] Waiting for caches to sync for PV protection controller
W0514 09:41:22.100] I0514 09:41:22.100196   50569 controller_utils.go:1036] Caches are synced for TTL controller
W0514 09:41:22.102] W0514 09:41:22.102159   50569 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0514 09:41:22.127] I0514 09:41:22.126620   50569 controller_utils.go:1036] Caches are synced for job controller
W0514 09:41:22.127] I0514 09:41:22.126724   50569 controller_utils.go:1036] Caches are synced for GC controller
W0514 09:41:22.129] I0514 09:41:22.129337   50569 controller_utils.go:1036] Caches are synced for PVC protection controller
W0514 09:41:22.132] I0514 09:41:22.132007   50569 controller_utils.go:1036] Caches are synced for HPA controller
W0514 09:41:22.137] I0514 09:41:22.137461   50569 controller_utils.go:1036] Caches are synced for ReplicaSet controller
W0514 09:41:22.143] I0514 09:41:22.142889   50569 controller_utils.go:1036] Caches are synced for endpoint controller
... skipping 24 lines ...
I0514 09:41:22.321]   "compiler": "gc",
I0514 09:41:22.321]   "platform": "linux/amd64"
I0514 09:41:22.460] }+++ [0514 09:41:22] Testing kubectl version: check client only output matches expected output
W0514 09:41:22.561] I0514 09:41:22.411794   50569 controller_utils.go:1036] Caches are synced for persistent volume controller
W0514 09:41:22.561] I0514 09:41:22.514011   50569 controller_utils.go:1036] Caches are synced for stateful set controller
W0514 09:41:22.561] I0514 09:41:22.517993   50569 controller_utils.go:1036] Caches are synced for ClusterRoleAggregator controller
W0514 09:41:22.561] E0514 09:41:22.530582   50569 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
W0514 09:41:22.562] E0514 09:41:22.531344   50569 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0514 09:41:22.562] E0514 09:41:22.536965   50569 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0514 09:41:22.562] I0514 09:41:22.538447   50569 controller_utils.go:1036] Caches are synced for disruption controller
W0514 09:41:22.562] I0514 09:41:22.538479   50569 disruption.go:294] Sending events to api server.
W0514 09:41:22.563] E0514 09:41:22.548823   50569 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
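The `Operation cannot be fulfilled on clusterroles ... the object has been modified` errors above are ordinary optimistic-concurrency conflicts: the aggregation controller wrote against a stale resourceVersion and simply retries. Client code handles the same condition with client-go's conflict-retry helper; a hedged sketch against a recent client-go (the clientset and the label mutation are illustrative, not what the controller does):

```go
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// updateClusterRole re-reads the object on every attempt so each retry
// carries a fresh resourceVersion, which is what resolves the conflict.
func updateClusterRole(cs kubernetes.Interface, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		cr, err := cs.RbacV1().ClusterRoles().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if cr.Labels == nil {
			cr.Labels = map[string]string{}
		}
		cr.Labels["example.com/touched"] = "true" // illustrative mutation
		_, err = cs.RbacV1().ClusterRoles().Update(context.TODO(), cr, metav1.UpdateOptions{})
		return err // a Conflict here triggers another attempt
	})
}
```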
W0514 09:41:22.597] I0514 09:41:22.596619   50569 controller_utils.go:1036] Caches are synced for service account controller
W0514 09:41:22.600] I0514 09:41:22.599832   47231 controller.go:606] quota admission added evaluator for: serviceaccounts
W0514 09:41:22.608] I0514 09:41:22.607774   50569 controller_utils.go:1036] Caches are synced for namespace controller
I0514 09:41:22.708] Successful: the flag '--client' shows correct client info
I0514 09:41:22.709] Successful: the flag '--client' correctly has no server version info
I0514 09:41:22.709] +++ [0514 09:41:22] Testing kubectl version: verify json output
... skipping 66 lines ...
I0514 09:41:26.258] +++ working dir: /go/src/k8s.io/kubernetes
I0514 09:41:26.263] +++ command: run_RESTMapper_evaluation_tests
I0514 09:41:26.278] +++ [0514 09:41:26] Creating namespace namespace-1557826886-22380
I0514 09:41:26.358] namespace/namespace-1557826886-22380 created
I0514 09:41:26.437] Context "test" modified.
I0514 09:41:26.448] +++ [0514 09:41:26] Testing RESTMapper
I0514 09:41:26.580] +++ [0514 09:41:26] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0514 09:41:26.604] +++ exit code: 0
I0514 09:41:26.758] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0514 09:41:26.759] bindings                                                                      true         Binding
I0514 09:41:26.759] componentstatuses                 cs                                          false        ComponentStatus
I0514 09:41:26.759] configmaps                        cm                                          true         ConfigMap
I0514 09:41:26.759] endpoints                         ep                                          true         Endpoints
... skipping 640 lines ...
I0514 09:41:48.587] core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0514 09:41:48.791] core.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0514 09:41:48.899] core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0514 09:41:49.080] core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0514 09:41:49.189] core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0514 09:41:49.297] pod "valid-pod" force deleted
W0514 09:41:49.397] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0514 09:41:49.398] error: setting 'all' parameter but found a non empty selector. 
W0514 09:41:49.398] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0514 09:41:49.498] core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{$id_field}}:{{end}}: 
I0514 09:41:49.552] core.sh:211: Successful get namespaces {{range.items}}{{ if eq $id_field \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
I0514 09:41:49.639] namespace/test-kubectl-describe-pod created
I0514 09:41:49.751] core.sh:215: Successful get namespaces/test-kubectl-describe-pod {{.metadata.name}}: test-kubectl-describe-pod
I0514 09:41:49.857] core.sh:219: Successful get secrets --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
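The `Successful get ... {{range.items}}{{.metadata.name}}:{{end}}` lines are test-cmd assertions: kubectl renders the live object through a Go template and the harness compares the output against an expected string. The template mechanics in isolation, with a made-up pod list standing in for real API output:

```go
package main

import (
	"os"
	"text/template"
)

func main() {
	// A stripped-down object shaped like `kubectl get pods -o json` output.
	podList := map[string]interface{}{
		"items": []interface{}{
			map[string]interface{}{"metadata": map[string]interface{}{"name": "valid-pod"}},
		},
	}

	// The same template the assertions above pass to kubectl.
	tmpl := template.Must(template.New("assert").Parse(
		"{{range.items}}{{.metadata.name}}:{{end}}"))

	_ = tmpl.Execute(os.Stdout, podList) // prints: valid-pod:
}
```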
... skipping 11 lines ...
I0514 09:41:51.066] poddisruptionbudget.policy/test-pdb-3 created
I0514 09:41:51.187] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0514 09:41:51.277] poddisruptionbudget.policy/test-pdb-4 created
I0514 09:41:51.392] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0514 09:41:51.582] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0514 09:41:51.820] pod/env-test-pod created
W0514 09:41:51.920] error: min-available and max-unavailable cannot be both specified
I0514 09:41:52.091] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0514 09:41:52.092] Name:         env-test-pod
I0514 09:41:52.092] Namespace:    test-kubectl-describe-pod
I0514 09:41:52.092] Priority:     0
I0514 09:41:52.092] Node:         <none>
I0514 09:41:52.092] Labels:       <none>
... skipping 143 lines ...
I0514 09:42:06.327] service "modified" deleted
I0514 09:42:06.452] replicationcontroller "modified" deleted
I0514 09:42:06.872] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0514 09:42:07.106] pod/valid-pod created
I0514 09:42:07.242] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0514 09:42:07.438] Successful
I0514 09:42:07.438] message:Error from server: cannot restore map from string
I0514 09:42:07.438] has:cannot restore map from string
W0514 09:42:07.539] E0514 09:42:07.423009   47231 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
I0514 09:42:07.639] Successful
I0514 09:42:07.639] message:pod/valid-pod patched (no change)
I0514 09:42:07.640] has:patched (no change)
I0514 09:42:07.650] pod/valid-pod patched
I0514 09:42:07.772] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0514 09:42:07.893] core.sh:457: Successful get pods {{range.items}}{{.metadata.annotations}}:{{end}}: map[kubernetes.io/change-cause:kubectl patch pod valid-pod --server=http://127.0.0.1:8080 --match-server-version=true --record=true --patch={"spec":{"containers":[{"name": "kubernetes-serve-hostname", "image": "nginx"}]}}]:
... skipping 4 lines ...
I0514 09:42:08.436] pod/valid-pod patched
I0514 09:42:08.563] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0514 09:42:08.655] pod/valid-pod patched
I0514 09:42:08.785] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0514 09:42:08.981] pod/valid-pod patched
I0514 09:42:09.119] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0514 09:42:09.351] +++ [0514 09:42:09] "kubectl patch with resourceVersion 507" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0514 09:42:09.707] pod "valid-pod" deleted
I0514 09:42:09.722] pod/valid-pod replaced
I0514 09:42:09.902] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0514 09:42:10.188] Successful
I0514 09:42:10.189] message:error: --grace-period must have --force specified
I0514 09:42:10.189] has:\-\-grace-period must have \-\-force specified
I0514 09:42:10.426] Successful
I0514 09:42:10.427] message:error: --timeout must have --force specified
I0514 09:42:10.427] has:\-\-timeout must have \-\-force specified
I0514 09:42:10.677] node/node-v1-test created
W0514 09:42:10.777] W0514 09:42:10.677353   50569 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0514 09:42:10.937] node/node-v1-test replaced
I0514 09:42:11.081] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0514 09:42:11.173] node "node-v1-test" deleted
I0514 09:42:11.302] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0514 09:42:11.688] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0514 09:42:13.124] core.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 16 lines ...
I0514 09:42:13.413]     name: kubernetes-pause
I0514 09:42:13.413] has:localonlyvalue
I0514 09:42:13.447] core.sh:585: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0514 09:42:13.643] core.sh:589: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0514 09:42:13.746] core.sh:593: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0514 09:42:13.838] pod/valid-pod labeled
W0514 09:42:13.938] error: 'name' already has a value (valid-pod), and --overwrite is false
I0514 09:42:14.039] core.sh:597: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
I0514 09:42:14.045] core.sh:601: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0514 09:42:14.136] pod "valid-pod" force deleted
W0514 09:42:14.237] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0514 09:42:14.337] core.sh:605: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0514 09:42:14.337] +++ [0514 09:42:14] Creating namespace namespace-1557826934-7319
... skipping 82 lines ...
I0514 09:42:23.208] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0514 09:42:23.211] +++ working dir: /go/src/k8s.io/kubernetes
I0514 09:42:23.213] +++ command: run_kubectl_create_error_tests
I0514 09:42:23.232] +++ [0514 09:42:23] Creating namespace namespace-1557826943-12737
I0514 09:42:23.316] namespace/namespace-1557826943-12737 created
I0514 09:42:23.396] Context "test" modified.
I0514 09:42:23.406] +++ [0514 09:42:23] Testing kubectl create with error
W0514 09:42:23.507] Error: must specify one of -f and -k
W0514 09:42:23.507] 
W0514 09:42:23.507] Create a resource from a file or from stdin.
W0514 09:42:23.507] 
W0514 09:42:23.507]  JSON and YAML formats are accepted.
W0514 09:42:23.507] 
W0514 09:42:23.508] Examples:
... skipping 41 lines ...
W0514 09:42:23.512] 
W0514 09:42:23.512] Usage:
W0514 09:42:23.512]   kubectl create -f FILENAME [options]
W0514 09:42:23.512] 
W0514 09:42:23.512] Use "kubectl <command> --help" for more information about a given command.
W0514 09:42:23.513] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0514 09:42:23.730] +++ [0514 09:42:23] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0514 09:42:23.831] kubectl convert is DEPRECATED and will be removed in a future version.
W0514 09:42:23.831] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0514 09:42:23.948] +++ exit code: 0
I0514 09:42:24.017] Recording: run_kubectl_apply_tests
I0514 09:42:24.017] Running command: run_kubectl_apply_tests
I0514 09:42:24.053] 
... skipping 20 lines ...
W0514 09:42:26.999] I0514 09:42:26.998301   47231 client.go:354] scheme "" not registered, fallback to default scheme
W0514 09:42:26.999] I0514 09:42:26.998335   47231 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0514 09:42:26.999] I0514 09:42:26.998373   47231 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0514 09:42:27.000] I0514 09:42:26.998958   47231 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0514 09:42:27.001] I0514 09:42:27.001398   47231 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
I0514 09:42:27.102] kind.mygroup.example.com/myobj serverside-applied (server dry run)
W0514 09:42:27.203] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0514 09:42:27.304] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0514 09:42:27.304] +++ exit code: 0
I0514 09:42:27.332] Recording: run_kubectl_run_tests
I0514 09:42:27.332] Running command: run_kubectl_run_tests
I0514 09:42:27.369] 
I0514 09:42:27.373] +++ Running case: test-cmd.run_kubectl_run_tests 
... skipping 95 lines ...
I0514 09:42:30.548] Context "test" modified.
I0514 09:42:30.559] +++ [0514 09:42:30] Testing kubectl create filter
I0514 09:42:30.680] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0514 09:42:30.945] pod/selector-test-pod created
I0514 09:42:31.101] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0514 09:42:31.209] Successful
I0514 09:42:31.210] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0514 09:42:31.210] has:pods "selector-test-pod-dont-apply" not found
I0514 09:42:31.302] pod "selector-test-pod" deleted
I0514 09:42:31.333] +++ exit code: 0
I0514 09:42:31.407] Recording: run_kubectl_apply_deployments_tests
I0514 09:42:31.407] Running command: run_kubectl_apply_deployments_tests
I0514 09:42:31.435] 
... skipping 26 lines ...
I0514 09:42:33.716] apps.sh:131: Successful get deployments my-depl {{.metadata.labels.l2}}: l2
I0514 09:42:33.823] deployment.extensions "my-depl" deleted
I0514 09:42:33.832] replicaset.extensions "my-depl-588655868c" deleted
I0514 09:42:33.840] replicaset.extensions "my-depl-69cd868dd5" deleted
I0514 09:42:33.848] pod "my-depl-588655868c-2j4z5" deleted
I0514 09:42:33.851] pod "my-depl-69cd868dd5-v9dg4" deleted
W0514 09:42:33.952] E0514 09:42:33.857460   50569 replica_set.go:450] Sync "namespace-1557826951-31447/my-depl-588655868c" failed with Operation cannot be fulfilled on replicasets.apps "my-depl-588655868c": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1557826951-31447/my-depl-588655868c, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 8bf371fd-8350-4128-9a24-fdc9410f9a8d, UID in object meta: 
W0514 09:42:33.953] E0514 09:42:33.862157   50569 replica_set.go:450] Sync "namespace-1557826951-31447/my-depl-69cd868dd5" failed with Operation cannot be fulfilled on replicasets.apps "my-depl-69cd868dd5": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1557826951-31447/my-depl-69cd868dd5, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 0db60c15-a01b-45c8-9721-763887619e42, UID in object meta: 
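The two `Sync ... failed with ... Precondition failed: UID in precondition ..., UID in object meta:` warnings above are a benign race: the ReplicaSets were already deleted, so a UID-scoped delete found no object carrying the expected UID. The same precondition can be expressed from client code; a hedged sketch against a recent client-go (function and variable names are illustrative):

```go
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteReplicaSetByUID deletes the named ReplicaSet only while it still
// carries the given UID; if another actor already deleted or recreated it,
// the apiserver rejects the call with a precondition failure rather than
// deleting the wrong object.
func deleteReplicaSetByUID(cs kubernetes.Interface, ns, name, uid string) error {
	return cs.AppsV1().ReplicaSets(ns).Delete(context.TODO(), name, metav1.DeleteOptions{
		Preconditions: metav1.NewUIDPreconditions(uid),
	})
}
```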
I0514 09:42:34.054] apps.sh:137: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0514 09:42:34.123] apps.sh:138: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0514 09:42:34.240] apps.sh:139: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0514 09:42:34.348] apps.sh:143: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0514 09:42:34.578] deployment.extensions/nginx created
W0514 09:42:34.679] I0514 09:42:34.584343   50569 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557826951-31447", Name:"nginx", UID:"4fa275e6-df83-44ed-bf1a-5fbe861dff9d", APIVersion:"apps/v1", ResourceVersion:"606", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-8c9ccf86d to 3
W0514 09:42:34.680] I0514 09:42:34.589256   50569 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557826951-31447", Name:"nginx-8c9ccf86d", UID:"38d4ebeb-d00d-4b90-92bb-114959782d03", APIVersion:"apps/v1", ResourceVersion:"607", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8c9ccf86d-js44q
W0514 09:42:34.680] I0514 09:42:34.593774   50569 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557826951-31447", Name:"nginx-8c9ccf86d", UID:"38d4ebeb-d00d-4b90-92bb-114959782d03", APIVersion:"apps/v1", ResourceVersion:"607", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8c9ccf86d-tckng
W0514 09:42:34.680] I0514 09:42:34.596575   50569 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557826951-31447", Name:"nginx-8c9ccf86d", UID:"38d4ebeb-d00d-4b90-92bb-114959782d03", APIVersion:"apps/v1", ResourceVersion:"607", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8c9ccf86d-btpwx
I0514 09:42:34.781] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I0514 09:42:39.068] Successful
I0514 09:42:39.068] message:Error from server (Conflict): error when applying patch:
I0514 09:42:39.069] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1557826951-31447\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0514 09:42:39.069] to:
I0514 09:42:39.069] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I0514 09:42:39.069] Name: "nginx", Namespace: "namespace-1557826951-31447"
I0514 09:42:39.071] Object: &{map["apiVersion":"extensions/v1beta1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1557826951-31447\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-05-14T09:42:34Z" "generation":'\x01' "labels":map["name":"nginx"] "managedFields":[map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map["f:deployment.kubernetes.io/revision":map[]]] "f:status":map["f:conditions":map[".":map[] "k:{\"type\":\"Available\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]]] "f:observedGeneration":map[] "f:replicas":map[] "f:unavailableReplicas":map[] "f:updatedReplicas":map[]]] "manager":"kube-controller-manager" "operation":"Update" "time":"2019-05-14T09:42:34Z"] map["apiVersion":"extensions/v1beta1" "fields":map["f:metadata":map["f:annotations":map[".":map[] "f:kubectl.kubernetes.io/last-applied-configuration":map[]] "f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:progressDeadlineSeconds":map[] "f:replicas":map[] "f:revisionHistoryLimit":map[] "f:selector":map[".":map[] "f:matchLabels":map[".":map[] "f:name":map[]]] "f:strategy":map["f:rollingUpdate":map[".":map[] "f:maxSurge":map[] "f:maxUnavailable":map[]] "f:type":map[]] "f:template":map["f:metadata":map["f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:containers":map["k:{\"name\":\"nginx\"}":map[".":map[] "f:image":map[] "f:imagePullPolicy":map[] "f:name":map[] "f:ports":map[".":map[] "k:{\"containerPort\":80,\"protocol\":\"TCP\"}":map[".":map[] "f:containerPort":map[] "f:protocol":map[]]] "f:resources":map[] "f:terminationMessagePath":map[] "f:terminationMessagePolicy":map[]]] "f:dnsPolicy":map[] "f:restartPolicy":map[] "f:schedulerName":map[] "f:securityContext":map[] "f:terminationGracePeriodSeconds":map[]]]]] "manager":"kubectl" "operation":"Update" "time":"2019-05-14T09:42:34Z"]] "name":"nginx" "namespace":"namespace-1557826951-31447" "resourceVersion":"619" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1557826951-31447/deployments/nginx" "uid":"4fa275e6-df83-44ed-bf1a-5fbe861dff9d"] "spec":map["progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\x03' "revisionHistoryLimit":%!q(int64=+2147483647) "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":'\x01' "maxUnavailable":'\x01'] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] "status":map["conditions":[map["lastTransitionTime":"2019-05-14T09:42:34Z" "lastUpdateTime":"2019-05-14T09:42:34Z" "message":"Deployment does not have minimum 
availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0514 09:42:39.071] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I0514 09:42:39.071] has:Error from server (Conflict)
W0514 09:42:39.172] I0514 09:42:37.459656   50569 horizontal.go:320] Horizontal Pod Autoscaler frontend has been deleted in namespace-1557826939-30
W0514 09:42:43.422] I0514 09:42:43.421398   47231 controller.go:606] quota admission added evaluator for: replicasets.extensions
I0514 09:42:44.390] deployment.extensions/nginx configured
W0514 09:42:44.491] I0514 09:42:44.396395   50569 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557826951-31447", Name:"nginx", UID:"21cfbdb8-063b-4bfb-8151-c48e838c9407", APIVersion:"apps/v1", ResourceVersion:"643", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-86bb9b4d9f to 3
W0514 09:42:44.491] I0514 09:42:44.400128   50569 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557826951-31447", Name:"nginx-86bb9b4d9f", UID:"2d8adec4-6897-4f8d-9eff-ced17cec42fe", APIVersion:"apps/v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-86bb9b4d9f-n625c
W0514 09:42:44.492] I0514 09:42:44.404994   50569 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557826951-31447", Name:"nginx-86bb9b4d9f", UID:"2d8adec4-6897-4f8d-9eff-ced17cec42fe", APIVersion:"apps/v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-86bb9b4d9f-pd2gc
W0514 09:42:44.492] I0514 09:42:44.405981   50569 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557826951-31447", Name:"nginx-86bb9b4d9f", UID:"2d8adec4-6897-4f8d-9eff-ced17cec42fe", APIVersion:"apps/v1", ResourceVersion:"644", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-86bb9b4d9f-rbsm9
I0514 09:42:44.592] Successful
I0514 09:42:44.593] message:        "name": "nginx2"
I0514 09:42:44.593]           "name": "nginx2"
I0514 09:42:44.593] has:"name": "nginx2"
W0514 09:42:48.989] E0514 09:42:48.988086   50569 replica_set.go:450] Sync "namespace-1557826951-31447/nginx-86bb9b4d9f" failed with Operation cannot be fulfilled on replicasets.apps "nginx-86bb9b4d9f": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1557826951-31447/nginx-86bb9b4d9f, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 2d8adec4-6897-4f8d-9eff-ced17cec42fe, UID in object meta: 
W0514 09:42:49.919] I0514 09:42:49.919144   50569 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557826951-31447", Name:"nginx", UID:"c6e57280-ccd0-4ce1-81ef-b95c5dd3beb1", APIVersion:"apps/v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-86bb9b4d9f to 3
W0514 09:42:49.924] I0514 09:42:49.923781   50569 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557826951-31447", Name:"nginx-86bb9b4d9f", UID:"41d30b75-3a05-44a1-b7e0-6497e3078e1e", APIVersion:"apps/v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-86bb9b4d9f-7ljk8
W0514 09:42:49.928] I0514 09:42:49.928252   50569 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557826951-31447", Name:"nginx-86bb9b4d9f", UID:"41d30b75-3a05-44a1-b7e0-6497e3078e1e", APIVersion:"apps/v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-86bb9b4d9f-97cnr
W0514 09:42:49.930] I0514 09:42:49.929649   50569 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557826951-31447", Name:"nginx-86bb9b4d9f", UID:"41d30b75-3a05-44a1-b7e0-6497e3078e1e", APIVersion:"apps/v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-86bb9b4d9f-dhrbm
I0514 09:42:50.030] Successful
I0514 09:42:50.031] message:The Deployment "nginx" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"name":"nginx3"}: `selector` does not match template `labels`
... skipping 159 lines ...
I0514 09:42:52.785] +++ [0514 09:42:52] Creating namespace namespace-1557826972-15577
I0514 09:42:52.872] namespace/namespace-1557826972-15577 created
I0514 09:42:52.954] Context "test" modified.
I0514 09:42:52.967] +++ [0514 09:42:52] Testing kubectl get
I0514 09:42:53.081] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0514 09:42:53.193] Successful
I0514 09:42:53.193] message:Error from server (NotFound): pods "abc" not found
I0514 09:42:53.194] has:pods "abc" not found
I0514 09:42:53.307] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0514 09:42:53.429] Successful
I0514 09:42:53.429] message:Error from server (NotFound): pods "abc" not found
I0514 09:42:53.429] has:pods "abc" not found
I0514 09:42:53.538] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0514 09:42:53.650] Successful
I0514 09:42:53.650] message:{
I0514 09:42:53.650]     "apiVersion": "v1",
I0514 09:42:53.651]     "items": [],
... skipping 23 lines ...
I0514 09:42:54.061] has not:No resources found
I0514 09:42:54.160] Successful
I0514 09:42:54.161] message:NAME
I0514 09:42:54.161] has not:No resources found
I0514 09:42:54.271] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0514 09:42:54.411] Successful
I0514 09:42:54.411] message:error: the server doesn't have a resource type "foobar"
I0514 09:42:54.412] has not:No resources found
I0514 09:42:54.513] Successful
I0514 09:42:54.514] message:No resources found.
I0514 09:42:54.514] has:No resources found
I0514 09:42:54.620] Successful
I0514 09:42:54.620] message:
I0514 09:42:54.620] has not:No resources found
I0514 09:42:54.723] Successful
I0514 09:42:54.724] message:No resources found.
I0514 09:42:54.724] has:No resources found
I0514 09:42:54.833] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0514 09:42:54.933] Successful
I0514 09:42:54.934] message:Error from server (NotFound): pods "abc" not found
I0514 09:42:54.934] has:pods "abc" not found
I0514 09:42:54.937] FAIL!
I0514 09:42:54.937] message:Error from server (NotFound): pods "abc" not found
I0514 09:42:54.937] has not:List
I0514 09:42:54.937] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0514 09:42:55.075] Successful
I0514 09:42:55.075] message:I0514 09:42:55.014382   61279 loader.go:359] Config loaded from file:  /tmp/tmp.DbZq7NURbW/.kube/config
I0514 09:42:55.075] I0514 09:42:55.016014   61279 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0514 09:42:55.075] I0514 09:42:55.043018   61279 round_trippers.go:438] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 888 lines ...
I0514 09:43:00.922] Successful
I0514 09:43:00.922] message:NAME    DATA   AGE
I0514 09:43:00.922] one     0      0s
I0514 09:43:00.922] three   0      0s
I0514 09:43:00.922] two     0      0s
I0514 09:43:00.923] STATUS    REASON          MESSAGE
I0514 09:43:00.923] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0514 09:43:00.923] has not:watch is only supported on individual resources
I0514 09:43:02.047] Successful
I0514 09:43:02.047] message:STATUS    REASON          MESSAGE
I0514 09:43:02.048] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0514 09:43:02.048] has not:watch is only supported on individual resources
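The recurring `unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)` failures are expected here: the tests run watch requests with a short client-side timeout, and in Go a client timeout bounds the entire request, including reading a streaming body. The mechanism in miniature, with a hypothetical slow streaming server standing in for the apiserver (exact error wording varies by Go version):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"time"
)

func main() {
	// A server that streams one line every half second, like a watch stream.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		f := w.(http.Flusher)
		for i := 0; ; i++ {
			select {
			case <-r.Context().Done():
				return // client went away
			case <-time.After(500 * time.Millisecond):
				fmt.Fprintf(w, "event %d\n", i)
				f.Flush()
			}
		}
	}))
	defer srv.Close()

	// Client.Timeout covers the whole exchange, so a long-lived stream
	// is always cut off mid-body eventually.
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get(srv.URL)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()

	_, err = io.Copy(io.Discard, resp.Body)
	fmt.Println(err) // Client.Timeout exceeded while reading body
}
```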
I0514 09:43:02.056] +++ [0514 09:43:02] Creating namespace namespace-1557826982-27403
I0514 09:43:02.146] namespace/namespace-1557826982-27403 created
I0514 09:43:02.253] Context "test" modified.
I0514 09:43:02.386] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0514 09:43:02.642] pod/valid-pod created
... skipping 104 lines ...
I0514 09:43:02.764] }
I0514 09:43:02.885] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0514 09:43:03.199] <no value>Successful
I0514 09:43:03.200] message:valid-pod:
I0514 09:43:03.200] has:valid-pod:
I0514 09:43:03.312] Successful
I0514 09:43:03.312] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0514 09:43:03.312] 	template was:
I0514 09:43:03.312] 		{.missing}
I0514 09:43:03.312] 	object given to jsonpath engine was:
I0514 09:43:03.314] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-05-14T09:43:02Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-05-14T09:43:02Z"}}, "name":"valid-pod", "namespace":"namespace-1557826982-27403", "resourceVersion":"717", "selfLink":"/api/v1/namespaces/namespace-1557826982-27403/pods/valid-pod", "uid":"fbd3dd20-95c9-487b-9443-14bdce713179"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0514 09:43:03.314] has:missing is not found
W0514 09:43:03.414] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
I0514 09:43:03.515] Successful
I0514 09:43:03.515] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0514 09:43:03.516] 	template was:
I0514 09:43:03.516] 		{{.missing}}
I0514 09:43:03.516] 	raw data was:
I0514 09:43:03.517] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-05-14T09:43:02Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-05-14T09:43:02Z"}],"name":"valid-pod","namespace":"namespace-1557826982-27403","resourceVersion":"717","selfLink":"/api/v1/namespaces/namespace-1557826982-27403/pods/valid-pod","uid":"fbd3dd20-95c9-487b-9443-14bdce713179"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0514 09:43:03.517] 	object given to template engine was:
I0514 09:43:03.518] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-05-14T09:43:02Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-05-14T09:43:02Z]] name:valid-pod namespace:namespace-1557826982-27403 resourceVersion:717 selfLink:/api/v1/namespaces/namespace-1557826982-27403/pods/valid-pod uid:fbd3dd20-95c9-487b-9443-14bdce713179] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0514 09:43:03.518] has:map has no entry for key "missing"
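The two template failures above are deliberate negative tests, one per engine: jsonpath (`{.missing}`) reports `missing is not found`, while the Go template (`{{.missing}}`) reports `map has no entry for key "missing"`, which is the error text/template produces under its missingkey=error option; the default option would instead render `<no value>`, as seen in the stray `<no value>` output a few lines earlier. The difference in isolation:

```go
package main

import (
	"fmt"
	"os"
	"text/template"
)

func main() {
	data := map[string]interface{}{"present": "value"}

	// Default option: a missing map key renders as "<no value>".
	lenient := template.Must(template.New("lenient").Parse("{{.missing}}\n"))
	_ = lenient.Execute(os.Stdout, data)

	// With missingkey=error the same lookup fails loudly, matching the
	// error text in the log above.
	strict := template.Must(
		template.New("strict").Option("missingkey=error").Parse("{{.missing}}\n"))
	if err := strict.Execute(os.Stdout, data); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```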
I0514 09:43:04.542] Successful
I0514 09:43:04.542] message:NAME        READY   STATUS    RESTARTS   AGE
I0514 09:43:04.542] valid-pod   0/1     Pending   0          1s
I0514 09:43:04.542] STATUS      REASON          MESSAGE
I0514 09:43:04.543] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0514 09:43:04.543] has:STATUS
I0514 09:43:04.545] Successful
I0514 09:43:04.546] message:NAME        READY   STATUS    RESTARTS   AGE
I0514 09:43:04.546] valid-pod   0/1     Pending   0          1s
I0514 09:43:04.546] STATUS      REASON          MESSAGE
I0514 09:43:04.546] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0514 09:43:04.546] has:valid-pod
I0514 09:43:05.660] Successful
I0514 09:43:05.660] message:pod/valid-pod
I0514 09:43:05.660] has not:STATUS
I0514 09:43:05.664] Successful
I0514 09:43:05.664] message:pod/valid-pod
... skipping 142 lines ...
I0514 09:43:06.789]   terminationGracePeriodSeconds: 30
I0514 09:43:06.789] status:
I0514 09:43:06.789]   phase: Pending
I0514 09:43:06.789]   qosClass: Guaranteed
I0514 09:43:06.789] has:name: valid-pod
I0514 09:43:06.892] Successful
I0514 09:43:06.893] message:Error from server (NotFound): pods "invalid-pod" not found
I0514 09:43:06.893] has:"invalid-pod" not found
I0514 09:43:06.980] pod "valid-pod" deleted
I0514 09:43:07.101] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0514 09:43:07.320] pod/redis-master created
I0514 09:43:07.324] pod/valid-pod created
I0514 09:43:07.459] Successful
... skipping 283 lines ...
I0514 09:43:14.266] +++ command: run_kubectl_exec_pod_tests
I0514 09:43:14.287] +++ [0514 09:43:14] Creating namespace namespace-1557826994-22598
I0514 09:43:14.380] namespace/namespace-1557826994-22598 created
I0514 09:43:14.474] Context "test" modified.
I0514 09:43:14.485] +++ [0514 09:43:14] Testing kubectl exec POD COMMAND
I0514 09:43:14.580] Successful
I0514 09:43:14.580] message:Error from server (NotFound): pods "abc" not found
I0514 09:43:14.580] has:pods "abc" not found
I0514 09:43:14.809] pod/test-pod created
I0514 09:43:14.941] Successful
I0514 09:43:14.941] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0514 09:43:14.941] has not:pods "test-pod" not found
I0514 09:43:14.944] Successful
I0514 09:43:14.944] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0514 09:43:14.944] has not:pod or type/name must be specified
I0514 09:43:15.040] pod "test-pod" deleted
I0514 09:43:15.071] +++ exit code: 0
I0514 09:43:15.133] Recording: run_kubectl_exec_resource_name_tests
I0514 09:43:15.134] Running command: run_kubectl_exec_resource_name_tests
I0514 09:43:15.166] 
... skipping 2 lines ...
I0514 09:43:15.178] +++ command: run_kubectl_exec_resource_name_tests
I0514 09:43:15.195] +++ [0514 09:43:15] Creating namespace namespace-1557826995-1571
I0514 09:43:15.281] namespace/namespace-1557826995-1571 created
I0514 09:43:15.377] Context "test" modified.
I0514 09:43:15.388] +++ [0514 09:43:15] Testing kubectl exec TYPE/NAME COMMAND
I0514 09:43:15.514] Successful
I0514 09:43:15.515] message:error: the server doesn't have a resource type "foo"
I0514 09:43:15.515] has:error:
I0514 09:43:15.625] Successful
I0514 09:43:15.626] message:Error from server (NotFound): deployments.extensions "bar" not found
I0514 09:43:15.626] has:"bar" not found
I0514 09:43:15.858] pod/test-pod created
I0514 09:43:16.102] replicaset.apps/frontend created
W0514 09:43:16.202] I0514 09:43:16.107920   50569 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557826995-1571", Name:"frontend", UID:"f68c085f-a09c-4b2f-a0eb-b17b652bb87a", APIVersion:"apps/v1", ResourceVersion:"835", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-stbhc
W0514 09:43:16.203] I0514 09:43:16.112772   50569 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557826995-1571", Name:"frontend", UID:"f68c085f-a09c-4b2f-a0eb-b17b652bb87a", APIVersion:"apps/v1", ResourceVersion:"835", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-h96l8
W0514 09:43:16.203] I0514 09:43:16.113308   50569 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557826995-1571", Name:"frontend", UID:"f68c085f-a09c-4b2f-a0eb-b17b652bb87a", APIVersion:"apps/v1", ResourceVersion:"835", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-jcdl5
I0514 09:43:16.359] configmap/test-set-env-config created
I0514 09:43:16.487] Successful
I0514 09:43:16.488] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I0514 09:43:16.488] has:not implemented
I0514 09:43:16.610] Successful
I0514 09:43:16.610] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0514 09:43:16.611] has not:not found
I0514 09:43:16.612] Successful
I0514 09:43:16.612] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0514 09:43:16.612] has not:pod or type/name must be specified
I0514 09:43:16.745] Successful
I0514 09:43:16.745] message:Error from server (BadRequest): pod frontend-h96l8 does not have a host assigned
I0514 09:43:16.746] has not:not found
I0514 09:43:16.748] Successful
I0514 09:43:16.748] message:Error from server (BadRequest): pod frontend-h96l8 does not have a host assigned
I0514 09:43:16.749] has not:pod or type/name must be specified
I0514 09:43:16.839] pod "test-pod" deleted
I0514 09:43:16.944] replicaset.extensions "frontend" deleted
I0514 09:43:17.046] configmap "test-set-env-config" deleted
I0514 09:43:17.079] +++ exit code: 0
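With TYPE/NAME, kubectl exec first resolves the resource to one of its pods via the resource's selector; each failure mode above corresponds to one resolution step. A sketch using the resource names from the log:

  kubectl exec foo/bar -- date          # unknown type: the server doesn't have a resource type "foo"
  kubectl exec deployments/bar -- date  # known type, missing object: "bar" not found
  kubectl exec rs/frontend -- date      # resolves to a frontend pod; fails only because no node is assigned
  kubectl exec configmap/test-set-env-config -- date  # ConfigMaps have no pod selector: not implemented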
I0514 09:43:17.160] Recording: run_create_secret_tests
I0514 09:43:17.160] Running command: run_create_secret_tests
I0514 09:43:17.196] 
I0514 09:43:17.199] +++ Running case: test-cmd.run_create_secret_tests 
I0514 09:43:17.202] +++ working dir: /go/src/k8s.io/kubernetes
I0514 09:43:17.206] +++ command: run_create_secret_tests
I0514 09:43:17.319] Successful
I0514 09:43:17.319] message:Error from server (NotFound): secrets "mysecret" not found
I0514 09:43:17.319] has:secrets "mysecret" not found
I0514 09:43:17.511] Successful
I0514 09:43:17.511] message:Error from server (NotFound): secrets "mysecret" not found
I0514 09:43:17.511] has:secrets "mysecret" not found
I0514 09:43:17.514] Successful
I0514 09:43:17.515] message:user-specified
I0514 09:43:17.515] has:user-specified
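The three checks above verify that the secret is absent, still absent after a dry run, and that a user-supplied value shows up in the rendered output. A hedged sketch (the literal key and output format are assumptions; the harness only greps for "user-specified"):

  kubectl get secret mysecret                     # Error from server (NotFound) while absent
  kubectl create secret generic mysecret --dry-run -o yaml \
    --from-literal=key1=user-specified            # renders the manifest without persisting it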
I0514 09:43:17.612] Successful
I0514 09:43:17.710] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"2ca886c5-af04-4393-bc83-9e43865b575b","resourceVersion":"856","creationTimestamp":"2019-05-14T09:43:17Z"}}
... skipping 164 lines ...
I0514 09:43:21.430] valid-pod   0/1     Pending   0          0s
I0514 09:43:21.430] has:valid-pod
I0514 09:43:22.534] Successful
I0514 09:43:22.535] message:NAME        READY   STATUS    RESTARTS   AGE
I0514 09:43:22.535] valid-pod   0/1     Pending   0          0s
I0514 09:43:22.535] STATUS      REASON          MESSAGE
I0514 09:43:22.535] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0514 09:43:22.535] has:Timeout exceeded while reading body
I0514 09:43:22.627] Successful
I0514 09:43:22.628] message:NAME        READY   STATUS    RESTARTS   AGE
I0514 09:43:22.628] valid-pod   0/1     Pending   0          1s
I0514 09:43:22.628] has:valid-pod
I0514 09:43:22.714] Successful
I0514 09:43:22.715] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0514 09:43:22.715] has:Invalid timeout value
I0514 09:43:22.815] pod "valid-pod" deleted
I0514 09:43:22.846] +++ exit code: 0
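Both outcomes above come from kubectl's global --request-timeout flag: a short timeout cuts the watch stream mid-read, and a malformed value is rejected client-side. A sketch (exact flag values assumed):

  kubectl get pods --watch --request-timeout=1s   # watch is cut; kubectl prints the InternalError status above
  kubectl get pods --request-timeout=invalid      # error: Invalid timeout value. Timeout must be a single
                                                  # integer in seconds, or an integer plus a time unit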
I0514 09:43:22.905] Recording: run_crd_tests
I0514 09:43:22.906] Running command: run_crd_tests
I0514 09:43:22.941] 
... skipping 243 lines ...
I0514 09:43:28.791] foo.company.com/test patched
I0514 09:43:28.900] crd.sh:237: Successful get foos/test {{.patched}}: value1
I0514 09:43:28.992] foo.company.com/test patched
I0514 09:43:29.106] crd.sh:239: Successful get foos/test {{.patched}}: value2
I0514 09:43:29.207] foo.company.com/test patched
I0514 09:43:29.327] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
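crd.sh:237-241 patch the same custom resource three times; because CRs carry no strategic-merge schema, every patch has to use --type merge (as the change-cause annotation below records), and patching a key to null removes it. A sketch with the values from the assertions:

  kubectl patch foos/test --type merge -p '{"patched":"value1"}'
  kubectl patch foos/test --type merge -p '{"patched":"value2"}'
  kubectl patch foos/test --type merge -p '{"patched":null}'   # field removed: template prints <no value>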
I0514 09:43:29.528] +++ [0514 09:43:29] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0514 09:43:29.609] {
I0514 09:43:29.610]     "apiVersion": "company.com/v1",
I0514 09:43:29.610]     "kind": "Foo",
I0514 09:43:29.610]     "metadata": {
I0514 09:43:29.610]         "annotations": {
I0514 09:43:29.610]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 327 lines ...
W0514 09:43:46.097] I0514 09:43:46.096039   47231 client.go:354] scheme "" not registered, fallback to default scheme
W0514 09:43:46.097] I0514 09:43:46.096083   47231 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0514 09:43:46.097] I0514 09:43:46.096225   47231 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0514 09:43:46.097] I0514 09:43:46.096692   47231 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0514 09:43:46.237] crd.sh:459: Successful get bars {{len .items}}: 0
I0514 09:43:46.445] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0514 09:43:46.546] Error from server (NotFound): namespaces "non-native-resources" not found
I0514 09:43:46.647] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0514 09:43:46.712] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0514 09:43:46.837] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0514 09:43:46.886] +++ exit code: 0
I0514 09:43:46.997] Recording: run_cmd_with_img_tests
I0514 09:43:46.998] Running command: run_cmd_with_img_tests
... skipping 10 lines ...
W0514 09:43:47.355] I0514 09:43:47.354323   50569 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557827027-8617", Name:"test1-7b9c75bcb9", UID:"abb04c5a-509c-4ee5-8c7a-c7202b818192", APIVersion:"apps/v1", ResourceVersion:"1010", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-7b9c75bcb9-87qwv
I0514 09:43:47.455] Successful
I0514 09:43:47.455] message:deployment.apps/test1 created
I0514 09:43:47.456] has:deployment.apps/test1 created
I0514 09:43:47.456] deployment.extensions "test1" deleted
I0514 09:43:47.555] Successful
I0514 09:43:47.556] message:error: Invalid image name "InvalidImageName": invalid reference format
I0514 09:43:47.556] has:error: Invalid image name "InvalidImageName": invalid reference format
I0514 09:43:47.581] +++ exit code: 0
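Image names are validated client-side against the Docker reference grammar before anything is created; "InvalidImageName" fails because uppercase letters are not allowed in a repository name. A sketch (in this 1.15-era build, kubectl run still defaults to the deprecated Deployment generator, hence deployment.apps/test1):

  kubectl run test1 --image=validname        # deployment.apps/test1 created
  kubectl run test2 --image=InvalidImageName
  # error: Invalid image name "InvalidImageName": invalid reference format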
I0514 09:43:47.645] +++ [0514 09:43:47] Testing recursive resources
I0514 09:43:47.654] +++ [0514 09:43:47] Creating namespace namespace-1557827027-31342
I0514 09:43:47.739] namespace/namespace-1557827027-31342 created
I0514 09:43:47.836] Context "test" modified.
I0514 09:43:47.959] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0514 09:43:48.332] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0514 09:43:48.335] Successful
I0514 09:43:48.335] message:pod/busybox0 created
I0514 09:43:48.336] pod/busybox1 created
I0514 09:43:48.336] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0514 09:43:48.336] has:error validating data: kind not set
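--recursive (-R) walks the fixture directory and submits every manifest it finds; one file is broken on purpose, so the good pods are created and the bad one fails validation without aborting the rest. A sketch against the directory named in the error:

  kubectl create -f hack/testdata/recursive/pod --recursive
  # pod/busybox0 created
  # pod/busybox1 created
  # error: error validating ".../busybox-broken.yaml": kind not set; use --validate=false to ignore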
I0514 09:43:48.450] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0514 09:43:48.668] generic-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0514 09:43:48.671] Successful
I0514 09:43:48.672] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0514 09:43:48.672] has:Object 'Kind' is missing
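The decode error is produced by the broken fixture itself, whose "kind" key is deliberately misspelled as "ind"; without a Kind the object cannot be routed to a type. A reproduction using the manifest body quoted in the message above:

  cat > /tmp/busybox-broken.yaml <<'EOF'
  {"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},
   "spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","name":"busybox"}]}}
  EOF
  kubectl create -f /tmp/busybox-broken.yaml
  # error: unable to decode "/tmp/busybox-broken.yaml": Object 'Kind' is missing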
I0514 09:43:48.785] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0514 09:43:49.170] generic-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0514 09:43:49.173] Successful
I0514 09:43:49.173] message:pod/busybox0 replaced
I0514 09:43:49.173] pod/busybox1 replaced
I0514 09:43:49.173] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0514 09:43:49.173] has:error validating data: kind not set
I0514 09:43:49.289] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0514 09:43:49.404] Successful
I0514 09:43:49.405] message:Name:         busybox0
I0514 09:43:49.405] Namespace:    namespace-1557827027-31342
I0514 09:43:49.405] Priority:     0
I0514 09:43:49.405] Node:         <none>
... skipping 153 lines ...
I0514 09:43:49.418] has:Object 'Kind' is missing
I0514 09:43:49.531] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0514 09:43:49.763] generic-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0514 09:43:49.764] Successful
I0514 09:43:49.764] message:pod/busybox0 annotated
I0514 09:43:49.764] pod/busybox1 annotated
I0514 09:43:49.764] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0514 09:43:49.764] has:Object 'Kind' is missing
I0514 09:43:49.882] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0514 09:43:50.292] generic-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0514 09:43:50.295] Successful
I0514 09:43:50.295] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0514 09:43:50.295] pod/busybox0 configured
I0514 09:43:50.295] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0514 09:43:50.296] pod/busybox1 configured
I0514 09:43:50.296] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0514 09:43:50.296] has:error validating data: kind not set
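The apply warnings above fire because the busybox pods were first made with plain kubectl create, which records no last-applied-configuration annotation, so apply has no base for its three-way merge. Creating with --save-config avoids the warning; a sketch:

  kubectl create -f pod.yaml --save-config   # writes kubectl.kubernetes.io/last-applied-configuration
  kubectl apply -f pod.yaml                  # no warning: the annotation supplies the merge base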
I0514 09:43:50.417] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0514 09:43:50.653] deployment.apps/nginx created
W0514 09:43:50.754] I0514 09:43:50.660601   50569 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557827027-31342", Name:"nginx", UID:"aae16c09-1df7-49dd-b8ca-104a24574468", APIVersion:"apps/v1", ResourceVersion:"1035", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-958dc566b to 3
W0514 09:43:50.754] I0514 09:43:50.665645   50569 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557827027-31342", Name:"nginx-958dc566b", UID:"74eecc9b-b2e8-4da0-a29d-9cf8184a7a48", APIVersion:"apps/v1", ResourceVersion:"1036", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-958dc566b-8mr2t
W0514 09:43:50.755] I0514 09:43:50.669758   50569 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557827027-31342", Name:"nginx-958dc566b", UID:"74eecc9b-b2e8-4da0-a29d-9cf8184a7a48", APIVersion:"apps/v1", ResourceVersion:"1036", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-958dc566b-bpm2z
W0514 09:43:50.755] I0514 09:43:50.670357   50569 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557827027-31342", Name:"nginx-958dc566b", UID:"74eecc9b-b2e8-4da0-a29d-9cf8184a7a48", APIVersion:"apps/v1", ResourceVersion:"1036", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-958dc566b-ssdxq
... skipping 63 lines ...
W0514 09:43:51.430] I0514 09:43:51.428688   47231 naming_controller.go:299] Shutting down NamingConditionController
W0514 09:43:51.431] I0514 09:43:51.428729   47231 crd_finalizer.go:262] Shutting down CRDFinalizer
W0514 09:43:51.431] I0514 09:43:51.428757   47231 customresource_discovery_controller.go:219] Shutting down DiscoveryController
W0514 09:43:51.431] I0514 09:43:51.428860   47231 secure_serving.go:160] Stopped listening on 127.0.0.1:6443
W0514 09:43:51.432] I0514 09:43:51.431670   47231 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0514 09:43:51.436] I0514 09:43:51.435892   47231 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0514 09:43:51.437] W0514 09:43:51.435904   47231 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
... skipping 118 lines ...
W0514 09:43:51.460] W0514 09:43:51.434364   47231 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
... skipping 72 identical lines ...
W0514 09:43:51.479] E0514 09:43:51.435099   47231 controller.go:179] Get https://127.0.0.1:6443/api/v1/namespaces/default/endpoints/kubernetes: dial tcp 127.0.0.1:6443: connect: connection refused
W0514 09:43:51.479] I0514 09:43:51.432801   47231 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0514 09:43:51.479] I0514 09:43:51.436269   47231 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0514 09:43:51.580] Waiting for Get pods {{range.items}}{{.metadata.name}}:{{end}} : expected: busybox0:busybox1:, got: busybox0:busybox1:nginx-958dc566b-8mr2t:nginx-958dc566b-bpm2z:nginx-958dc566b-ssdxq:
I0514 09:43:51.581] 
I0514 09:43:51.581] generic-resources.sh:280: FAIL!
I0514 09:43:51.581] Get pods {{range.items}}{{.metadata.name}}:{{end}}
I0514 09:43:51.581]   Expected: busybox0:busybox1:
I0514 09:43:51.581]   Got:      busybox0:busybox1:nginx-958dc566b-8mr2t:nginx-958dc566b-bpm2z:nginx-958dc566b-ssdxq:
I0514 09:43:51.581] 51 /go/src/k8s.io/kubernetes/hack/lib/test.sh
I0514 09:43:51.640] junit report dir: /workspace/artifacts
I0514 09:43:51.643] +++ [0514 09:43:51] Clean up complete
I0514 09:43:51.648] Makefile:329: recipe for target 'test-cmd' failed
W0514 09:43:51.749] make: *** [test-cmd] Error 1
W0514 09:43:56.528] Traceback (most recent call last):
W0514 09:43:56.528]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 178, in <module>
W0514 09:43:56.528]     ARGS.exclude_typecheck, ARGS.exclude_godep)
W0514 09:43:56.528]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 140, in main
W0514 09:43:56.528]     check(*cmd)
W0514 09:43:56.528]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W0514 09:43:56.528]     subprocess.check_call(cmd)
W0514 09:43:56.529]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0514 09:43:56.529]     raise CalledProcessError(retcode, cmd)
W0514 09:43:56.529] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=y', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'EXCLUDE_TYPECHECK=n', '-e', 'EXCLUDE_GODEP=n', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.14-v20190318-2ac98e338', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
E0514 09:43:56.539] Command failed
I0514 09:43:56.539] process 494 exited with code 1 after 11.9m
E0514 09:43:56.539] FAIL: ci-kubernetes-integration-master
I0514 09:43:56.540] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0514 09:43:57.286] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0514 09:43:57.363] process 67872 exited with code 0 after 0.0m
I0514 09:43:57.363] Call:  gcloud config get-value account
I0514 09:43:57.755] process 67884 exited with code 0 after 0.0m
I0514 09:43:57.755] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0514 09:43:57.756] Upload result and artifacts...
I0514 09:43:57.756] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-kubernetes-integration-master/1128231201497157634
I0514 09:43:57.756] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/1128231201497157634/artifacts
W0514 09:43:59.117] CommandException: One or more URLs matched no objects.
E0514 09:43:59.295] Command failed
I0514 09:43:59.295] process 67896 exited with code 1 after 0.0m
W0514 09:43:59.295] Remote dir gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/1128231201497157634/artifacts does not exist yet
I0514 09:43:59.295] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/1128231201497157634/artifacts
I0514 09:44:01.701] process 68038 exited with code 0 after 0.0m
W0514 09:44:01.701] metadata path /workspace/_artifacts/metadata.json does not exist
W0514 09:44:01.701] metadata not found or invalid, init with empty metadata
... skipping 15 lines ...