Result: FAILURE
Tests: 0 failed / 89 succeeded
Started: 2019-05-13 23:21
Elapsed: 16m18s
Revision:
Builder: gke-prow-containerd-pool-99179761-dtm3
pod: a9623463-75d5-11e9-b740-0a580a6c086f
resultstore: https://source.cloud.google.com/results/invocations/2e55dfcc-86dd-413e-ad4f-5d33785e044f/targets/test
infra-commit: abfb1ad07
repo: k8s.io/kubernetes
repo-commit: d881c0d77bef9c268433c6ec0770fcc80ab79c59
repos: {'k8s.io/kubernetes': 'master'}

No Test Failures!



Error lines from build-log.txt

... skipping 306 lines ...
W0513 23:32:37.576] I0513 23:32:37.576023   47575 serving.go:312] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
W0513 23:32:37.577] I0513 23:32:37.576115   47575 server.go:558] external host was not specified, using 172.17.0.2
W0513 23:32:37.577] W0513 23:32:37.576127   47575 authentication.go:415] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
W0513 23:32:37.578] I0513 23:32:37.576620   47575 server.go:145] Version: v1.15.0-alpha.3.289+d881c0d77bef9c
W0513 23:32:37.894] I0513 23:32:37.893537   47575 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0513 23:32:37.895] I0513 23:32:37.893590   47575 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0513 23:32:37.895] E0513 23:32:37.894433   47575 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 23:32:37.896] E0513 23:32:37.894495   47575 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 23:32:37.896] E0513 23:32:37.894542   47575 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 23:32:37.896] E0513 23:32:37.894585   47575 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 23:32:37.897] E0513 23:32:37.894633   47575 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 23:32:37.897] E0513 23:32:37.894679   47575 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 23:32:37.897] E0513 23:32:37.894713   47575 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 23:32:37.898] E0513 23:32:37.894738   47575 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 23:32:37.898] E0513 23:32:37.894860   47575 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 23:32:37.899] E0513 23:32:37.894953   47575 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 23:32:37.899] E0513 23:32:37.894996   47575 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 23:32:37.899] E0513 23:32:37.895026   47575 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 23:32:37.900] I0513 23:32:37.895767   47575 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0513 23:32:37.900] I0513 23:32:37.895793   47575 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0513 23:32:37.900] I0513 23:32:37.898783   47575 client.go:354] parsed scheme: ""
W0513 23:32:37.901] I0513 23:32:37.898809   47575 client.go:354] scheme "" not registered, fallback to default scheme
W0513 23:32:37.901] I0513 23:32:37.898875   47575 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0513 23:32:37.901] I0513 23:32:37.899009   47575 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 361 lines ...
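The repeated "duplicate metrics collector registration attempted" errors above come from registering the same admission_quota_controller collectors a second time against a global Prometheus registry. A minimal Go sketch of how such duplicates can be tolerated, using prometheus/client_golang's documented Register/AlreadyRegisteredError behavior; the metric name is illustrative, not the apiserver's actual wiring:

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

// registerOrReuse registers c in the default registry; if an identical
// collector is already registered (the duplicate-registration case in
// the log), it reuses the existing collector instead of failing.
func registerOrReuse(c prometheus.Collector) (prometheus.Collector, error) {
	if err := prometheus.Register(c); err != nil {
		if are, ok := err.(prometheus.AlreadyRegisteredError); ok {
			return are.ExistingCollector, nil
		}
		return nil, err
	}
	return c, nil
}

func main() {
	mk := func() prometheus.Gauge {
		return prometheus.NewGauge(prometheus.GaugeOpts{
			Name: "admission_quota_controller_depth", // hypothetical name
			Help: "Example gauge registered twice.",
		})
	}
	// The second registration would normally fail with a duplicate
	// error; registerOrReuse absorbs it.
	for i := 0; i < 2; i++ {
		if _, err := registerOrReuse(mk()); err != nil {
			fmt.Println("registration failed:", err)
		}
	}
	fmt.Println("both registrations handled without a duplicate error")
}
```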
W0513 23:32:38.710] W0513 23:32:38.709544   47575 genericapiserver.go:347] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0513 23:32:38.891] I0513 23:32:38.891251   47575 client.go:354] parsed scheme: ""
W0513 23:32:38.892] I0513 23:32:38.891296   47575 client.go:354] scheme "" not registered, fallback to default scheme
W0513 23:32:38.893] I0513 23:32:38.891353   47575 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0513 23:32:38.893] I0513 23:32:38.891412   47575 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 23:32:38.893] I0513 23:32:38.891993   47575 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 23:32:39.996] E0513 23:32:39.995952   47575 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 23:32:40.026] E0513 23:32:39.996030   47575 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 23:32:40.026] E0513 23:32:39.996068   47575 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 23:32:40.026] E0513 23:32:39.996100   47575 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 23:32:40.027] E0513 23:32:39.996142   47575 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 23:32:40.027] E0513 23:32:39.996195   47575 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 23:32:40.027] E0513 23:32:39.996220   47575 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 23:32:40.027] E0513 23:32:39.996243   47575 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 23:32:40.027] E0513 23:32:39.996306   47575 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 23:32:40.028] E0513 23:32:39.996358   47575 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 23:32:40.028] E0513 23:32:39.996392   47575 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 23:32:40.028] E0513 23:32:39.996419   47575 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 23:32:40.028] I0513 23:32:39.996460   47575 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0513 23:32:40.029] I0513 23:32:39.996469   47575 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0513 23:32:40.029] I0513 23:32:39.998676   47575 client.go:354] parsed scheme: ""
W0513 23:32:40.029] I0513 23:32:39.998707   47575 client.go:354] scheme "" not registered, fallback to default scheme
W0513 23:32:40.029] I0513 23:32:39.998764   47575 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0513 23:32:40.029] I0513 23:32:39.998844   47575 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 105 lines ...
W0513 23:33:36.558] I0513 23:33:36.556988   50903 controllermanager.go:523] Started "garbagecollector"
W0513 23:33:36.558] W0513 23:33:36.557002   50903 controllermanager.go:502] "bootstrapsigner" is disabled
W0513 23:33:36.558] I0513 23:33:36.558617   50903 controllermanager.go:523] Started "persistentvolume-expander"
W0513 23:33:36.559] I0513 23:33:36.558728   50903 expand_controller.go:153] Starting expand controller
W0513 23:33:36.559] I0513 23:33:36.558750   50903 graph_builder.go:307] GraphBuilder running
W0513 23:33:36.559] I0513 23:33:36.558765   50903 controller_utils.go:1029] Waiting for caches to sync for expand controller
W0513 23:33:36.559] E0513 23:33:36.559346   50903 core.go:76] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0513 23:33:36.560] W0513 23:33:36.559369   50903 controllermanager.go:515] Skipping "service"
W0513 23:33:36.560] I0513 23:33:36.559380   50903 core.go:170] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W0513 23:33:36.560] W0513 23:33:36.559387   50903 controllermanager.go:515] Skipping "route"
W0513 23:33:36.560] I0513 23:33:36.560079   50903 controllermanager.go:523] Started "endpoint"
W0513 23:33:36.561] I0513 23:33:36.560178   50903 endpoints_controller.go:166] Starting endpoint controller
W0513 23:33:36.561] I0513 23:33:36.560250   50903 controller_utils.go:1029] Waiting for caches to sync for endpoint controller
... skipping 52 lines ...
W0513 23:33:36.792] I0513 23:33:36.791813   50903 attach_detach_controller.go:335] Starting attach detach controller
W0513 23:33:36.792] I0513 23:33:36.791844   50903 controller_utils.go:1029] Waiting for caches to sync for attach detach controller
W0513 23:33:36.793] I0513 23:33:36.793629   50903 controllermanager.go:523] Started "job"
W0513 23:33:36.794] I0513 23:33:36.793664   50903 job_controller.go:143] Starting job controller
W0513 23:33:36.794] I0513 23:33:36.793694   50903 controller_utils.go:1029] Waiting for caches to sync for job controller
W0513 23:33:36.794] I0513 23:33:36.793969   50903 node_lifecycle_controller.go:77] Sending events to api server
W0513 23:33:36.794] E0513 23:33:36.794019   50903 core.go:160] failed to start cloud node lifecycle controller: no cloud provider provided
W0513 23:33:36.794] W0513 23:33:36.794033   50903 controllermanager.go:515] Skipping "cloud-node-lifecycle"
W0513 23:33:36.818] The Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.0.0.1": provided IP is already allocated
W0513 23:33:36.852] I0513 23:33:36.852088   50903 controller_utils.go:1036] Caches are synced for PVC protection controller
W0513 23:33:36.853] I0513 23:33:36.852423   50903 controller_utils.go:1036] Caches are synced for service account controller
W0513 23:33:36.853] I0513 23:33:36.852724   50903 controller_utils.go:1036] Caches are synced for PV protection controller
W0513 23:33:36.853] I0513 23:33:36.852804   50903 controller_utils.go:1036] Caches are synced for GC controller
W0513 23:33:36.858] I0513 23:33:36.858255   47575 controller.go:606] quota admission added evaluator for: serviceaccounts
W0513 23:33:36.860] I0513 23:33:36.859087   50903 controller_utils.go:1036] Caches are synced for expand controller
W0513 23:33:36.863] I0513 23:33:36.863238   50903 controller_utils.go:1036] Caches are synced for ReplicationController controller
W0513 23:33:36.869] W0513 23:33:36.868755   50903 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0513 23:33:36.874] I0513 23:33:36.874404   50903 controller_utils.go:1036] Caches are synced for certificate controller
W0513 23:33:36.883] I0513 23:33:36.883267   50903 controller_utils.go:1036] Caches are synced for ClusterRoleAggregator controller
W0513 23:33:36.888] I0513 23:33:36.887944   50903 controller_utils.go:1036] Caches are synced for namespace controller
W0513 23:33:36.891] I0513 23:33:36.890928   50903 controller_utils.go:1036] Caches are synced for persistent volume controller
W0513 23:33:36.892] I0513 23:33:36.892004   50903 controller_utils.go:1036] Caches are synced for attach detach controller
W0513 23:33:36.897] E0513 23:33:36.896857   50903 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
W0513 23:33:36.897] E0513 23:33:36.897341   50903 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0513 23:33:36.912] E0513 23:33:36.912063   50903 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0513 23:33:36.943] I0513 23:33:36.943124   50903 controller_utils.go:1036] Caches are synced for TTL controller
W0513 23:33:36.951] I0513 23:33:36.950975   50903 controller_utils.go:1036] Caches are synced for ReplicaSet controller
W0513 23:33:36.990] I0513 23:33:36.989931   50903 controller_utils.go:1036] Caches are synced for deployment controller
W0513 23:33:37.039] I0513 23:33:37.038808   50903 controller_utils.go:1036] Caches are synced for disruption controller
W0513 23:33:37.039] I0513 23:33:37.038851   50903 disruption.go:294] Sending events to api server.
I0513 23:33:37.140] NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
... skipping 94 lines ...
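The "Operation cannot be fulfilled on clusterroles... the object has been modified" errors above are ordinary optimistic-concurrency conflicts: two writers raced on the same resourceVersion, and the loser must re-read and retry. A minimal client-go sketch of that retry pattern, assuming a v1.15-era clientset is already constructed (the label being set is hypothetical):

```go
import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// relabelClusterRole updates a ClusterRole, retrying on resourceVersion
// conflicts the way controllers are expected to.
func relabelClusterRole(cs kubernetes.Interface, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Re-read on every attempt so the update applies to the
		// latest version, as the conflict message instructs.
		cr, err := cs.RbacV1().ClusterRoles().Get(name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if cr.Labels == nil {
			cr.Labels = map[string]string{}
		}
		cr.Labels["example.com/touched"] = "true" // hypothetical change
		_, err = cs.RbacV1().ClusterRoles().Update(cr)
		return err // a Conflict error here triggers another attempt
	})
}
```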
I0513 23:33:41.310] +++ command: run_RESTMapper_evaluation_tests
I0513 23:33:41.329] +++ [0513 23:33:41] Creating namespace namespace-1557790421-13242
W0513 23:33:41.429] /go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 148: 51567 Terminated              kubectl proxy --port=0 --www=. --api-prefix="$1" > ${PROXY_PORT_FILE} 2>&1
I0513 23:33:41.530] namespace/namespace-1557790421-13242 created
I0513 23:33:41.540] Context "test" modified.
I0513 23:33:41.553] +++ [0513 23:33:41] Testing RESTMapper
I0513 23:33:41.739] +++ [0513 23:33:41] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0513 23:33:41.759] +++ exit code: 0
I0513 23:33:41.942] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0513 23:33:41.943] bindings                                                                      true         Binding
I0513 23:33:41.943] componentstatuses                 cs                                          false        ComponentStatus
I0513 23:33:41.943] configmaps                        cm                                          true         ConfigMap
I0513 23:33:41.943] endpoints                         ep                                          true         Endpoints
... skipping 640 lines ...
I0513 23:34:11.618] core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0513 23:34:11.858] (Bcore.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0513 23:34:12.001] (Bcore.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0513 23:34:12.283] (Bcore.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0513 23:34:12.442] (Bcore.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0513 23:34:12.585] (Bpod "valid-pod" force deleted
W0513 23:34:12.687] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0513 23:34:12.799] error: setting 'all' parameter but found a non empty selector. 
W0513 23:34:12.799] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0513 23:34:12.900] core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{$id_field}}:{{end}}: 
I0513 23:34:12.957] (Bcore.sh:211: Successful get namespaces {{range.items}}{{ if eq $id_field \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
I0513 23:34:13.081] (Bnamespace/test-kubectl-describe-pod created
I0513 23:34:13.232] core.sh:215: Successful get namespaces/test-kubectl-describe-pod {{.metadata.name}}: test-kubectl-describe-pod
I0513 23:34:13.374] (Bcore.sh:219: Successful get secrets --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 11 lines ...
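Assertions such as `core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:` work by rendering the API response through a Go template and comparing the result against the expected string after the final colon. A self-contained sketch of that pattern using plain text/template over a decoded pod list, as a stand-in for the real kubectl output path:

```go
package main

import (
	"encoding/json"
	"os"
	"text/template"
)

func main() {
	// Stand-in for a `kubectl get pods -o json` response.
	raw := `{"items":[{"metadata":{"name":"valid-pod"}}]}`

	var podList map[string]interface{}
	if err := json.Unmarshal([]byte(raw), &podList); err != nil {
		panic(err)
	}

	// The exact template used throughout core.sh.
	t := template.Must(template.New("get").Parse(
		`{{range.items}}{{.metadata.name}}:{{end}}`))

	// Prints "valid-pod:", the expected value in the assertions above.
	if err := t.Execute(os.Stdout, podList); err != nil {
		panic(err)
	}
}
```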
I0513 23:34:14.975] (Bpoddisruptionbudget.policy/test-pdb-3 created
I0513 23:34:15.112] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0513 23:34:15.247] (Bpoddisruptionbudget.policy/test-pdb-4 created
I0513 23:34:15.399] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0513 23:34:15.620] (Bcore.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0513 23:34:15.920] (Bpod/env-test-pod created
W0513 23:34:16.020] error: min-available and max-unavailable cannot be both specified
I0513 23:34:16.320] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0513 23:34:16.321] Name:         env-test-pod
I0513 23:34:16.322] Namespace:    test-kubectl-describe-pod
I0513 23:34:16.322] Priority:     0
I0513 23:34:16.322] Node:         <none>
I0513 23:34:16.323] Labels:       <none>
... skipping 143 lines ...
I0513 23:34:32.963] (Bservice "modified" deleted
I0513 23:34:33.121] replicationcontroller "modified" deleted
I0513 23:34:33.609] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0513 23:34:33.896] (Bpod/valid-pod created
I0513 23:34:34.059] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0513 23:34:34.317] (BSuccessful
I0513 23:34:34.318] message:Error from server: cannot restore map from string
I0513 23:34:34.318] has:cannot restore map from string
W0513 23:34:34.419] E0513 23:34:34.303437   47575 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
I0513 23:34:34.519] Successful
I0513 23:34:34.520] message:pod/valid-pod patched (no change)
I0513 23:34:34.520] has:patched (no change)
I0513 23:34:34.614] pod/valid-pod patched
I0513 23:34:34.761] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0513 23:34:35.001] (Bcore.sh:457: Successful get pods {{range.items}}{{.metadata.annotations}}:{{end}}: map[kubernetes.io/change-cause:kubectl patch pod valid-pod --server=http://127.0.0.1:8080 --match-server-version=true --record=true --patch={"spec":{"containers":[{"name": "kubernetes-serve-hostname", "image": "nginx"}]}}]:
... skipping 4 lines ...
I0513 23:34:35.631] (Bpod/valid-pod patched
I0513 23:34:35.782] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0513 23:34:35.914] (Bpod/valid-pod patched
I0513 23:34:36.184] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0513 23:34:36.585] (Bpod/valid-pod patched
I0513 23:34:36.812] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0513 23:34:37.736] (B+++ [0513 23:34:37] "kubectl patch with resourceVersion 513" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0513 23:34:38.730] pod "valid-pod" deleted
I0513 23:34:38.747] pod/valid-pod replaced
W0513 23:34:39.406] I0513 23:34:39.405711   47575 trace.go:81] Trace[1212868560]: "Get /api/v1/namespaces/namespace-1557790464-7139/pods/valid-pod" (started: 2019-05-13 23:34:38.867423518 +0000 UTC m=+122.567959111) (total time: 538.239995ms):
W0513 23:34:39.407] Trace[1212868560]: [537.808681ms] [537.794227ms] About to write a response
I0513 23:34:39.508] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0513 23:34:40.351] (BSuccessful
I0513 23:34:40.352] message:error: --grace-period must have --force specified
I0513 23:34:40.352] has:\-\-grace-period must have \-\-force specified
I0513 23:34:40.358] Successful
I0513 23:34:40.359] message:error: --timeout must have --force specified
I0513 23:34:40.360] has:\-\-timeout must have \-\-force specified
I0513 23:34:40.360] node/node-v1-test created
W0513 23:34:40.461] W0513 23:34:40.075060   50903 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
W0513 23:34:41.327] I0513 23:34:41.326599   47575 trace.go:81] Trace[1473820437]: "GuaranteedUpdate etcd3: *core.Node" (started: 2019-05-13 23:34:40.58660851 +0000 UTC m=+124.287144124) (total time: 739.922879ms):
W0513 23:34:41.327] Trace[1473820437]: [739.704816ms] [738.602052ms] Transaction committed
W0513 23:34:41.328] I0513 23:34:41.326919   47575 trace.go:81] Trace[1258840278]: "Update /api/v1/nodes/node-v1-test" (started: 2019-05-13 23:34:40.586376595 +0000 UTC m=+124.286912198) (total time: 740.509728ms):
W0513 23:34:41.328] Trace[1258840278]: [740.359233ms] [740.205271ms] Object stored in database
W0513 23:34:41.328] I0513 23:34:41.326654   47575 trace.go:81] Trace[473584596]: "GuaranteedUpdate etcd3: *core.Node" (started: 2019-05-13 23:34:40.591133964 +0000 UTC m=+124.291669579) (total time: 735.479853ms):
W0513 23:34:41.328] Trace[473584596]: [735.311105ms] [734.136722ms] Transaction committed
... skipping 28 lines ...
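The Trace[...] lines above are the apiserver's request tracing: a trace is opened per operation, steps are recorded along the way, and the whole trace is logged with per-step timings only when it exceeds a latency threshold (here the valid-pod GET took ~538ms and the node update ~740ms). A sketch of that pattern with k8s.io/utils/trace; the 500ms threshold is an assumption for illustration, not necessarily what the apiserver uses:

```go
import (
	"time"

	utiltrace "k8s.io/utils/trace"
)

// handleGet mimics the tracing shape behind the log lines above.
func handleGet() {
	t := utiltrace.New("Get /api/v1/namespaces/default/pods/valid-pod")
	// Emit the trace, with per-step durations, only if it ran long.
	defer t.LogIfLong(500 * time.Millisecond)

	// ... fetch the object from storage ...
	t.Step("About to write a response")
	// ... encode and write the response ...
}
```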
I0513 23:34:44.941]     name: kubernetes-pause
I0513 23:34:44.941] has:localonlyvalue
I0513 23:34:45.017] core.sh:585: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0513 23:34:45.275] (Bcore.sh:589: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0513 23:34:45.421] (Bcore.sh:593: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0513 23:34:45.550] (Bpod/valid-pod labeled
W0513 23:34:45.651] error: 'name' already has a value (valid-pod), and --overwrite is false
I0513 23:34:46.146] core.sh:597: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
I0513 23:34:46.216] (Bcore.sh:601: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0513 23:34:46.350] (Bpod "valid-pod" force deleted
W0513 23:34:46.452] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0513 23:34:46.553] core.sh:605: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0513 23:34:46.554] (B+++ [0513 23:34:46] Creating namespace namespace-1557790486-27205
... skipping 82 lines ...
I0513 23:34:57.364] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0513 23:34:57.368] +++ working dir: /go/src/k8s.io/kubernetes
I0513 23:34:57.370] +++ command: run_kubectl_create_error_tests
I0513 23:34:57.386] +++ [0513 23:34:57] Creating namespace namespace-1557790497-2763
I0513 23:34:57.482] namespace/namespace-1557790497-2763 created
I0513 23:34:57.567] Context "test" modified.
I0513 23:34:57.574] +++ [0513 23:34:57] Testing kubectl create with error
W0513 23:34:57.675] Error: must specify one of -f and -k
W0513 23:34:57.675] 
W0513 23:34:57.675] Create a resource from a file or from stdin.
W0513 23:34:57.676] 
W0513 23:34:57.676]  JSON and YAML formats are accepted.
W0513 23:34:57.676] 
W0513 23:34:57.676] Examples:
... skipping 41 lines ...
W0513 23:34:57.683] 
W0513 23:34:57.683] Usage:
W0513 23:34:57.683]   kubectl create -f FILENAME [options]
W0513 23:34:57.683] 
W0513 23:34:57.683] Use "kubectl <command> --help" for more information about a given command.
W0513 23:34:57.683] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0513 23:34:57.920] +++ [0513 23:34:57] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0513 23:34:58.020] kubectl convert is DEPRECATED and will be removed in a future version.
W0513 23:34:58.021] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0513 23:34:58.144] +++ exit code: 0
I0513 23:34:58.182] Recording: run_kubectl_apply_tests
I0513 23:34:58.183] Running command: run_kubectl_apply_tests
I0513 23:34:58.208] 
... skipping 32 lines ...
W0513 23:35:02.720] I0513 23:35:02.720014   47575 client.go:354] scheme "" not registered, fallback to default scheme
W0513 23:35:02.721] I0513 23:35:02.720071   47575 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0513 23:35:02.721] I0513 23:35:02.720133   47575 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 23:35:02.721] I0513 23:35:02.720688   47575 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 23:35:02.724] I0513 23:35:02.723830   47575 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
I0513 23:35:02.824] kind.mygroup.example.com/myobj serverside-applied (server dry run)
W0513 23:35:03.146] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0513 23:35:03.280] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0513 23:35:03.328] +++ exit code: 0
I0513 23:35:03.389] Recording: run_kubectl_run_tests
I0513 23:35:03.389] Running command: run_kubectl_run_tests
I0513 23:35:03.415] 
I0513 23:35:03.424] +++ Running case: test-cmd.run_kubectl_run_tests 
... skipping 95 lines ...
I0513 23:35:07.168] Context "test" modified.
I0513 23:35:07.178] +++ [0513 23:35:07] Testing kubectl create filter
I0513 23:35:07.330] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0513 23:35:07.621] (Bpod/selector-test-pod created
I0513 23:35:07.765] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0513 23:35:07.885] (BSuccessful
I0513 23:35:07.886] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0513 23:35:07.886] has:pods "selector-test-pod-dont-apply" not found
I0513 23:35:08.005] pod "selector-test-pod" deleted
I0513 23:35:08.030] +++ exit code: 0
I0513 23:35:08.096] Recording: run_kubectl_apply_deployments_tests
I0513 23:35:08.097] Running command: run_kubectl_apply_deployments_tests
I0513 23:35:08.122] 
... skipping 39 lines ...
W0513 23:35:11.684] I0513 23:35:11.596016   50903 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557790508-6620", Name:"nginx-8c9ccf86d", UID:"c5411ae8-281d-4546-b9cf-bb208aa00502", APIVersion:"apps/v1", ResourceVersion:"619", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8c9ccf86d-48tnl
W0513 23:35:11.685] I0513 23:35:11.603935   50903 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557790508-6620", Name:"nginx-8c9ccf86d", UID:"c5411ae8-281d-4546-b9cf-bb208aa00502", APIVersion:"apps/v1", ResourceVersion:"619", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8c9ccf86d-kzgkv
W0513 23:35:11.685] I0513 23:35:11.605966   50903 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557790508-6620", Name:"nginx-8c9ccf86d", UID:"c5411ae8-281d-4546-b9cf-bb208aa00502", APIVersion:"apps/v1", ResourceVersion:"619", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8c9ccf86d-gmvs4
W0513 23:35:11.685] I0513 23:35:11.646972   50903 horizontal.go:320] Horizontal Pod Autoscaler frontend has been deleted in namespace-1557790493-30897
I0513 23:35:11.786] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I0513 23:35:16.143] (BSuccessful
I0513 23:35:16.146] message:Error from server (Conflict): error when applying patch:
I0513 23:35:16.147] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1557790508-6620\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0513 23:35:16.147] to:
I0513 23:35:16.147] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I0513 23:35:16.148] Name: "nginx", Namespace: "namespace-1557790508-6620"
I0513 23:35:16.151] Object: &{map["apiVersion":"extensions/v1beta1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1557790508-6620\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-05-13T23:35:11Z" "generation":'\x01' "labels":map["name":"nginx"] "managedFields":[map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map["f:deployment.kubernetes.io/revision":map[]]] "f:status":map["f:conditions":map[".":map[] "k:{\"type\":\"Available\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]]] "f:observedGeneration":map[] "f:replicas":map[] "f:unavailableReplicas":map[] "f:updatedReplicas":map[]]] "manager":"kube-controller-manager" "operation":"Update" "time":"2019-05-13T23:35:11Z"] map["apiVersion":"extensions/v1beta1" "fields":map["f:metadata":map["f:annotations":map[".":map[] "f:kubectl.kubernetes.io/last-applied-configuration":map[]] "f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:progressDeadlineSeconds":map[] "f:replicas":map[] "f:revisionHistoryLimit":map[] "f:selector":map[".":map[] "f:matchLabels":map[".":map[] "f:name":map[]]] "f:strategy":map["f:rollingUpdate":map[".":map[] "f:maxSurge":map[] "f:maxUnavailable":map[]] "f:type":map[]] "f:template":map["f:metadata":map["f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:containers":map["k:{\"name\":\"nginx\"}":map[".":map[] "f:image":map[] "f:imagePullPolicy":map[] "f:name":map[] "f:ports":map[".":map[] "k:{\"containerPort\":80,\"protocol\":\"TCP\"}":map[".":map[] "f:containerPort":map[] "f:protocol":map[]]] "f:resources":map[] "f:terminationMessagePath":map[] "f:terminationMessagePolicy":map[]]] "f:dnsPolicy":map[] "f:restartPolicy":map[] "f:schedulerName":map[] "f:securityContext":map[] "f:terminationGracePeriodSeconds":map[]]]]] "manager":"kubectl" "operation":"Update" "time":"2019-05-13T23:35:11Z"]] "name":"nginx" "namespace":"namespace-1557790508-6620" "resourceVersion":"632" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1557790508-6620/deployments/nginx" "uid":"e29aad15-71bd-42d1-96c7-97b797271abe"] "spec":map["progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\x03' "revisionHistoryLimit":%!q(int64=+2147483647) "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":'\x01' "maxUnavailable":'\x01'] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] "status":map["conditions":[map["lastTransitionTime":"2019-05-13T23:35:11Z" "lastUpdateTime":"2019-05-13T23:35:11Z" "message":"Deployment does not have minimum availability." 
"reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0513 23:35:16.151] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I0513 23:35:16.151] has:Error from server (Conflict)
W0513 23:35:20.832] E0513 23:35:20.831685   50903 replica_set.go:450] Sync "namespace-1557790508-6620/nginx-8c9ccf86d" failed with replicasets.apps "nginx-8c9ccf86d" not found
I0513 23:35:21.767] deployment.extensions/nginx configured
W0513 23:35:21.868] I0513 23:35:21.779692   50903 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557790508-6620", Name:"nginx", UID:"bbf237ba-76d9-4892-bc20-19041bb65c52", APIVersion:"apps/v1", ResourceVersion:"655", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-86bb9b4d9f to 3
W0513 23:35:21.869] I0513 23:35:21.796847   50903 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557790508-6620", Name:"nginx-86bb9b4d9f", UID:"fab674e5-0e47-4113-92e6-1e2ce6dd8bd4", APIVersion:"apps/v1", ResourceVersion:"656", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-86bb9b4d9f-dfltn
W0513 23:35:21.870] I0513 23:35:21.806663   50903 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557790508-6620", Name:"nginx-86bb9b4d9f", UID:"fab674e5-0e47-4113-92e6-1e2ce6dd8bd4", APIVersion:"apps/v1", ResourceVersion:"656", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-86bb9b4d9f-nk79k
W0513 23:35:21.871] I0513 23:35:21.806743   50903 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557790508-6620", Name:"nginx-86bb9b4d9f", UID:"fab674e5-0e47-4113-92e6-1e2ce6dd8bd4", APIVersion:"apps/v1", ResourceVersion:"656", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-86bb9b4d9f-7l6ml
I0513 23:35:21.971] Successful
I0513 23:35:21.971] message:        "name": "nginx2"
I0513 23:35:21.972]           "name": "nginx2"
I0513 23:35:21.972] has:"name": "nginx2"
W0513 23:35:27.329] E0513 23:35:27.329175   50903 replica_set.go:450] Sync "namespace-1557790508-6620/nginx-86bb9b4d9f" failed with replicasets.apps "nginx-86bb9b4d9f" not found
W0513 23:35:28.287] I0513 23:35:28.286836   50903 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557790508-6620", Name:"nginx", UID:"0c516e79-ceef-4f48-9e94-88d4b2032c28", APIVersion:"apps/v1", ResourceVersion:"689", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-86bb9b4d9f to 3
W0513 23:35:28.292] I0513 23:35:28.292217   50903 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557790508-6620", Name:"nginx-86bb9b4d9f", UID:"8c9cab69-21a7-42e0-9ee0-776b65bb28ce", APIVersion:"apps/v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-86bb9b4d9f-748w2
W0513 23:35:28.300] I0513 23:35:28.299750   50903 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557790508-6620", Name:"nginx-86bb9b4d9f", UID:"8c9cab69-21a7-42e0-9ee0-776b65bb28ce", APIVersion:"apps/v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-86bb9b4d9f-562tp
W0513 23:35:28.307] I0513 23:35:28.307190   50903 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557790508-6620", Name:"nginx-86bb9b4d9f", UID:"8c9cab69-21a7-42e0-9ee0-776b65bb28ce", APIVersion:"apps/v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-86bb9b4d9f-47rgm
I0513 23:35:28.408] Successful
I0513 23:35:28.409] message:The Deployment "nginx" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"name":"nginx3"}: `selector` does not match template `labels`
... skipping 159 lines ...
I0513 23:35:31.189] +++ [0513 23:35:31] Creating namespace namespace-1557790531-31010
I0513 23:35:31.272] namespace/namespace-1557790531-31010 created
I0513 23:35:31.371] Context "test" modified.
I0513 23:35:31.379] +++ [0513 23:35:31] Testing kubectl get
I0513 23:35:31.500] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0513 23:35:31.601] (BSuccessful
I0513 23:35:31.601] message:Error from server (NotFound): pods "abc" not found
I0513 23:35:31.601] has:pods "abc" not found
I0513 23:35:31.721] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0513 23:35:31.812] (BSuccessful
I0513 23:35:31.812] message:Error from server (NotFound): pods "abc" not found
I0513 23:35:31.812] has:pods "abc" not found
I0513 23:35:31.915] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0513 23:35:32.014] (BSuccessful
I0513 23:35:32.014] message:{
I0513 23:35:32.014]     "apiVersion": "v1",
I0513 23:35:32.014]     "items": [],
... skipping 23 lines ...
I0513 23:35:32.456] has not:No resources found
I0513 23:35:32.573] Successful
I0513 23:35:32.573] message:NAME
I0513 23:35:32.573] has not:No resources found
I0513 23:35:32.677] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0513 23:35:32.800] (BSuccessful
I0513 23:35:32.801] message:error: the server doesn't have a resource type "foobar"
I0513 23:35:32.801] has not:No resources found
I0513 23:35:32.915] Successful
I0513 23:35:32.916] message:No resources found.
I0513 23:35:32.916] has:No resources found
I0513 23:35:33.027] Successful
I0513 23:35:33.027] message:
I0513 23:35:33.027] has not:No resources found
I0513 23:35:33.163] Successful
I0513 23:35:33.163] message:No resources found.
I0513 23:35:33.164] has:No resources found
I0513 23:35:33.288] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0513 23:35:33.403] (BSuccessful
I0513 23:35:33.403] message:Error from server (NotFound): pods "abc" not found
I0513 23:35:33.403] has:pods "abc" not found
I0513 23:35:33.405] FAIL!
I0513 23:35:33.405] message:Error from server (NotFound): pods "abc" not found
I0513 23:35:33.406] has not:List
I0513 23:35:33.406] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0513 23:35:33.603] Successful
I0513 23:35:33.604] message:I0513 23:35:33.517986   61139 loader.go:359] Config loaded from file:  /tmp/tmp.554KnUBj5e/.kube/config
I0513 23:35:33.604] I0513 23:35:33.520850   61139 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 2 milliseconds
I0513 23:35:33.605] I0513 23:35:33.571201   61139 round_trippers.go:438] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 888 lines ...
I0513 23:35:39.627] Successful
I0513 23:35:39.627] message:NAME    DATA   AGE
I0513 23:35:39.627] one     0      0s
I0513 23:35:39.627] three   0      0s
I0513 23:35:39.627] two     0      0s
I0513 23:35:39.628] STATUS    REASON          MESSAGE
I0513 23:35:39.628] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0513 23:35:39.628] has not:watch is only supported on individual resources
I0513 23:35:40.727] Successful
I0513 23:35:40.727] message:STATUS    REASON          MESSAGE
I0513 23:35:40.728] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0513 23:35:40.728] has not:watch is only supported on individual resources
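The "unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)" failures above are the expected outcome of running a watch with a client-side timeout: Go's http.Client.Timeout bounds the entire exchange, including reading the response body, so a long-lived watch stream gets cut off mid-read. A minimal Go illustration of that semantic (the URL is a placeholder for the test apiserver):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// A client-wide timeout covers reading the (endless) watch body.
	client := &http.Client{Timeout: 1 * time.Second}

	resp, err := client.Get("http://127.0.0.1:8080/api/v1/pods?watch=true")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	// On a stream that outlives the timeout, this read fails with
	// "net/http: request canceled (Client.Timeout exceeded while
	// reading body)" -- the same error surfaced in the test output.
	_, err = io.Copy(io.Discard, resp.Body)
	fmt.Println("stream ended:", err)
}
```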
I0513 23:35:40.735] +++ [0513 23:35:40] Creating namespace namespace-1557790540-22742
I0513 23:35:40.819] namespace/namespace-1557790540-22742 created
I0513 23:35:40.901] Context "test" modified.
I0513 23:35:41.005] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0513 23:35:41.222] (Bpod/valid-pod created
... skipping 104 lines ...
I0513 23:35:41.331] }
I0513 23:35:41.431] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0513 23:35:41.710] (B<no value>Successful
I0513 23:35:41.710] message:valid-pod:
I0513 23:35:41.710] has:valid-pod:
I0513 23:35:41.805] Successful
I0513 23:35:41.805] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0513 23:35:41.806] 	template was:
I0513 23:35:41.806] 		{.missing}
I0513 23:35:41.806] 	object given to jsonpath engine was:
I0513 23:35:41.808] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-05-13T23:35:41Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-05-13T23:35:41Z"}}, "name":"valid-pod", "namespace":"namespace-1557790540-22742", "resourceVersion":"731", "selfLink":"/api/v1/namespaces/namespace-1557790540-22742/pods/valid-pod", "uid":"e9a3a39e-15cb-4552-b15b-5e4627f913bd"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0513 23:35:41.808] has:missing is not found
I0513 23:35:41.900] Successful
I0513 23:35:41.900] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0513 23:35:41.900] 	template was:
I0513 23:35:41.901] 		{{.missing}}
I0513 23:35:41.901] 	raw data was:
I0513 23:35:41.902] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-05-13T23:35:41Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-05-13T23:35:41Z"}],"name":"valid-pod","namespace":"namespace-1557790540-22742","resourceVersion":"731","selfLink":"/api/v1/namespaces/namespace-1557790540-22742/pods/valid-pod","uid":"e9a3a39e-15cb-4552-b15b-5e4627f913bd"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0513 23:35:41.902] 	object given to template engine was:
I0513 23:35:41.903] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-05-13T23:35:41Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-05-13T23:35:41Z]] name:valid-pod namespace:namespace-1557790540-22742 resourceVersion:731 selfLink:/api/v1/namespaces/namespace-1557790540-22742/pods/valid-pod uid:e9a3a39e-15cb-4552-b15b-5e4627f913bd] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0513 23:35:41.903] has:map has no entry for key "missing"
W0513 23:35:42.004] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
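The pair of failures above contrasts jsonpath ({.missing}) with go-template ({{.missing}}) on an absent key: both are made to error out rather than print a placeholder. For the go-template case this corresponds to text/template's missingkey=error option, which produces exactly the "map has no entry for key" message seen in the log; a small sketch:

```go
package main

import (
	"fmt"
	"os"
	"text/template"
)

func main() {
	data := map[string]interface{}{"apiVersion": "v1", "kind": "Pod"}

	// Default behavior: a missing map key renders as "<no value>".
	lax := template.Must(template.New("lax").Parse(`{{.missing}}`))
	_ = lax.Execute(os.Stdout, data)
	fmt.Println()

	// With missingkey=error, Execute fails the way the test output does:
	// executing "output" at <.missing>: map has no entry for key "missing"
	strict := template.Must(template.New("output").
		Option("missingkey=error").Parse(`{{.missing}}`))
	if err := strict.Execute(os.Stdout, data); err != nil {
		fmt.Println("template error:", err)
	}
}
```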
I0513 23:35:43.020] Successful
I0513 23:35:43.020] message:NAME        READY   STATUS    RESTARTS   AGE
I0513 23:35:43.021] valid-pod   0/1     Pending   0          1s
I0513 23:35:43.021] STATUS      REASON          MESSAGE
I0513 23:35:43.021] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0513 23:35:43.021] has:STATUS
I0513 23:35:43.022] Successful
I0513 23:35:43.023] message:NAME        READY   STATUS    RESTARTS   AGE
I0513 23:35:43.023] valid-pod   0/1     Pending   0          1s
I0513 23:35:43.023] STATUS      REASON          MESSAGE
I0513 23:35:43.023] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0513 23:35:43.023] has:valid-pod
I0513 23:35:44.121] Successful
I0513 23:35:44.122] message:pod/valid-pod
I0513 23:35:44.122] has not:STATUS
I0513 23:35:44.124] Successful
I0513 23:35:44.124] message:pod/valid-pod
... skipping 142 lines ...
I0513 23:35:45.270]   terminationGracePeriodSeconds: 30
I0513 23:35:45.270] status:
I0513 23:35:45.270]   phase: Pending
I0513 23:35:45.270]   qosClass: Guaranteed
I0513 23:35:45.270] has:name: valid-pod
I0513 23:35:45.426] Successful
I0513 23:35:45.427] message:Error from server (NotFound): pods "invalid-pod" not found
I0513 23:35:45.427] has:"invalid-pod" not found
I0513 23:35:45.615] pod "valid-pod" deleted
I0513 23:35:45.881] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0513 23:35:46.272] (Bpod/redis-master created
I0513 23:35:46.282] pod/valid-pod created
I0513 23:35:46.501] Successful
... skipping 283 lines ...
I0513 23:35:55.838] +++ command: run_kubectl_exec_pod_tests
I0513 23:35:55.852] +++ [0513 23:35:55] Creating namespace namespace-1557790555-30230
I0513 23:35:55.938] namespace/namespace-1557790555-30230 created
I0513 23:35:56.023] Context "test" modified.
I0513 23:35:56.032] +++ [0513 23:35:56] Testing kubectl exec POD COMMAND
I0513 23:35:56.142] Successful
I0513 23:35:56.142] message:Error from server (NotFound): pods "abc" not found
I0513 23:35:56.142] has:pods "abc" not found
I0513 23:35:56.376] pod/test-pod created
I0513 23:35:56.506] Successful
I0513 23:35:56.507] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0513 23:35:56.507] has not:pods "test-pod" not found
I0513 23:35:56.509] Successful
I0513 23:35:56.509] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0513 23:35:56.509] has not:pod or type/name must be specified
I0513 23:35:56.593] pod "test-pod" deleted
I0513 23:35:56.616] +++ exit code: 0
I0513 23:35:57.174] Recording: run_kubectl_exec_resource_name_tests
I0513 23:35:57.174] Running command: run_kubectl_exec_resource_name_tests
I0513 23:35:57.203] 
... skipping 2 lines ...
I0513 23:35:57.211] +++ command: run_kubectl_exec_resource_name_tests
I0513 23:35:57.225] +++ [0513 23:35:57] Creating namespace namespace-1557790557-770
I0513 23:35:57.307] namespace/namespace-1557790557-770 created
I0513 23:35:57.388] Context "test" modified.
I0513 23:35:57.399] +++ [0513 23:35:57] Testing kubectl exec TYPE/NAME COMMAND
I0513 23:35:57.509] Successful
I0513 23:35:57.510] message:error: the server doesn't have a resource type "foo"
I0513 23:35:57.510] has:error:
I0513 23:35:57.608] Successful
I0513 23:35:57.609] message:Error from server (NotFound): deployments.extensions "bar" not found
I0513 23:35:57.609] has:"bar" not found
I0513 23:35:57.818] pod/test-pod created
I0513 23:35:58.050] replicaset.apps/frontend created
W0513 23:35:58.151] I0513 23:35:58.058823   50903 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557790557-770", Name:"frontend", UID:"83538f78-f054-498f-94b3-874b4990f46a", APIVersion:"apps/v1", ResourceVersion:"851", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-mrl9p
W0513 23:35:58.151] I0513 23:35:58.064658   50903 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557790557-770", Name:"frontend", UID:"83538f78-f054-498f-94b3-874b4990f46a", APIVersion:"apps/v1", ResourceVersion:"851", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-vl8dk
W0513 23:35:58.152] I0513 23:35:58.069646   50903 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557790557-770", Name:"frontend", UID:"83538f78-f054-498f-94b3-874b4990f46a", APIVersion:"apps/v1", ResourceVersion:"851", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-54mf5
I0513 23:35:58.284] configmap/test-set-env-config created
I0513 23:35:58.400] Successful
I0513 23:35:58.401] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I0513 23:35:58.401] has:not implemented
I0513 23:35:58.522] Successful
I0513 23:35:58.522] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0513 23:35:58.522] has not:not found
I0513 23:35:58.524] Successful
I0513 23:35:58.524] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0513 23:35:58.524] has not:pod or type/name must be specified
I0513 23:35:58.658] Successful
I0513 23:35:58.658] message:Error from server (BadRequest): pod frontend-54mf5 does not have a host assigned
I0513 23:35:58.658] has not:not found
I0513 23:35:58.660] Successful
I0513 23:35:58.661] message:Error from server (BadRequest): pod frontend-54mf5 does not have a host assigned
I0513 23:35:58.661] has not:pod or type/name must be specified
I0513 23:35:58.740] pod "test-pod" deleted
I0513 23:35:58.831] replicaset.extensions "frontend" deleted
I0513 23:35:58.932] configmap "test-set-env-config" deleted
I0513 23:35:58.956] +++ exit code: 0
I0513 23:35:59.005] Recording: run_create_secret_tests
I0513 23:35:59.005] Running command: run_create_secret_tests
I0513 23:35:59.031] 
I0513 23:35:59.033] +++ Running case: test-cmd.run_create_secret_tests 
I0513 23:35:59.036] +++ working dir: /go/src/k8s.io/kubernetes
I0513 23:35:59.039] +++ command: run_create_secret_tests
I0513 23:35:59.136] Successful
I0513 23:35:59.136] message:Error from server (NotFound): secrets "mysecret" not found
I0513 23:35:59.136] has:secrets "mysecret" not found
I0513 23:35:59.311] Successful
I0513 23:35:59.311] message:Error from server (NotFound): secrets "mysecret" not found
I0513 23:35:59.311] has:secrets "mysecret" not found
I0513 23:35:59.313] Successful
I0513 23:35:59.313] message:user-specified
I0513 23:35:59.313] has:user-specified
I0513 23:35:59.391] Successful
I0513 23:35:59.486] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"695c03f0-0a91-4ef0-a21f-2437ea47925d","resourceVersion":"871","creationTimestamp":"2019-05-13T23:35:59Z"}}
... skipping 164 lines ...
I0513 23:36:03.228] valid-pod   0/1     Pending   0          1s
I0513 23:36:03.229] has:valid-pod
I0513 23:36:04.356] Successful
I0513 23:36:04.356] message:NAME        READY   STATUS    RESTARTS   AGE
I0513 23:36:04.357] valid-pod   0/1     Pending   0          1s
I0513 23:36:04.357] STATUS      REASON          MESSAGE
I0513 23:36:04.357] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0513 23:36:04.357] has:Timeout exceeded while reading body
I0513 23:36:04.457] Successful
I0513 23:36:04.457] message:NAME        READY   STATUS    RESTARTS   AGE
I0513 23:36:04.458] valid-pod   0/1     Pending   0          2s
I0513 23:36:04.458] has:valid-pod
I0513 23:36:04.549] Successful
I0513 23:36:04.549] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0513 23:36:04.549] has:Invalid timeout value
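The rejected value demonstrates kubectl's client-side timeout validation. A hedged sketch:

  kubectl get pods --request-timeout=invalid   # error: Invalid timeout value. ...
  kubectl get pods --request-timeout=2m        # accepted: plain seconds, or an integer plus a unit (1s | 2m | 3h)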
I0513 23:36:04.636] pod "valid-pod" deleted
I0513 23:36:04.662] +++ exit code: 0
I0513 23:36:05.273] Recording: run_crd_tests
I0513 23:36:05.274] Running command: run_crd_tests
I0513 23:36:05.299] 
... skipping 250 lines ...
W0513 23:36:10.821] I0513 23:36:09.780530   50903 controller_utils.go:1036] Caches are synced for resource quota controller
I0513 23:36:10.921] crd.sh:237: Successful get foos/test {{.patched}}: value1
I0513 23:36:10.921] foo.company.com/test patched
I0513 23:36:11.034] crd.sh:239: Successful get foos/test {{.patched}}: value2
I0513 23:36:11.133] foo.company.com/test patched
I0513 23:36:11.239] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I0513 23:36:11.428] +++ [0513 23:36:11] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
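Strategic merge patches rely on patch metadata compiled into built-in types, which CustomResources lack, so kubectl patch --local fails until the patch type is switched to a JSON merge patch, as the error suggests. A hedged sketch (the manifest file name is hypothetical):

  kubectl patch --local -f foo.yaml --patch '{"patched":"value2"}'                # fails: cannot apply strategic merge patch
  kubectl patch --local -f foo.yaml --patch '{"patched":"value2"}' --type merge   # succeeds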
I0513 23:36:11.503] {
I0513 23:36:11.503]     "apiVersion": "company.com/v1",
I0513 23:36:11.504]     "kind": "Foo",
I0513 23:36:11.504]     "metadata": {
I0513 23:36:11.504]         "annotations": {
I0513 23:36:11.504]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 319 lines ...
I0513 23:36:23.363] namespace/non-native-resources created
I0513 23:36:23.668] bar.company.com/test created
I0513 23:36:23.834] crd.sh:456: Successful get bars {{len .items}}: 1
I0513 23:36:23.968] namespace "non-native-resources" deleted
I0513 23:36:29.263] crd.sh:459: Successful get bars {{len .items}}: 0
I0513 23:36:29.443] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0513 23:36:29.544] Error from server (NotFound): namespaces "non-native-resources" not found
I0513 23:36:29.644] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0513 23:36:29.661] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0513 23:36:29.776] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0513 23:36:29.821] +++ exit code: 0
I0513 23:36:30.275] Recording: run_cmd_with_img_tests
I0513 23:36:30.276] Running command: run_cmd_with_img_tests
... skipping 10 lines ...
W0513 23:36:30.692] I0513 23:36:30.691446   50903 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557790590-13529", Name:"test1-7b9c75bcb9", UID:"1915f2f9-287c-4066-bdd5-98b2689e2566", APIVersion:"apps/v1", ResourceVersion:"1026", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-7b9c75bcb9-wfg5x
I0513 23:36:30.793] Successful
I0513 23:36:30.793] message:deployment.apps/test1 created
I0513 23:36:30.794] has:deployment.apps/test1 created
I0513 23:36:30.850] deployment.extensions "test1" deleted
I0513 23:36:30.970] Successful
I0513 23:36:30.971] message:error: Invalid image name "InvalidImageName": invalid reference format
I0513 23:36:30.971] has:error: Invalid image name "InvalidImageName": invalid reference format
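kubectl validates image references client-side against the container image reference grammar; uppercase letters make "InvalidImageName" unparsable. A hedged sketch (the exact commands and the valid image are assumptions, not taken from the test source):

  kubectl create deployment test1 --image=k8s.gcr.io/pause:3.1   # deployment.apps/test1 created
  kubectl create deployment test2 --image=InvalidImageName       # error: Invalid image name "InvalidImageName": invalid reference format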
I0513 23:36:30.989] +++ exit code: 0
I0513 23:36:31.597] +++ [0513 23:36:31] Testing recursive resources
I0513 23:36:31.607] +++ [0513 23:36:31] Creating namespace namespace-1557790591-19728
I0513 23:36:31.737] namespace/namespace-1557790591-19728 created
I0513 23:36:31.854] Context "test" modified.
I0513 23:36:32.035] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0513 23:36:32.577] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0513 23:36:32.578] Successful
I0513 23:36:32.578] message:pod/busybox0 created
I0513 23:36:32.578] pod/busybox1 created
I0513 23:36:32.578] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0513 23:36:32.579] has:error validating data: kind not set
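The recursive tests point kubectl at a directory tree in which one manifest is deliberately broken (busybox-broken.yaml spells the kind key as "ind", as the decode error below shows), so the valid pods are created while validation flags the bad file. A sketch of the invocation, with the flag assumed from the test's name:

  kubectl create -f hack/testdata/recursive/pod --recursive
  # pod/busybox0 created; pod/busybox1 created;
  # busybox-broken.yaml fails validation: kind not set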
I0513 23:36:32.711] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0513 23:36:32.987] generic-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0513 23:36:32.995] Successful
I0513 23:36:32.996] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0513 23:36:32.997] has:Object 'Kind' is missing
I0513 23:36:33.145] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0513 23:36:33.621] generic-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0513 23:36:33.625] Successful
I0513 23:36:33.625] message:pod/busybox0 replaced
I0513 23:36:33.625] pod/busybox1 replaced
I0513 23:36:33.625] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0513 23:36:33.625] has:error validating data: kind not set
I0513 23:36:33.781] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0513 23:36:33.974] Successful
I0513 23:36:33.975] message:Name:         busybox0
I0513 23:36:33.975] Namespace:    namespace-1557790591-19728
I0513 23:36:33.975] Priority:     0
I0513 23:36:33.976] Node:         <none>
... skipping 154 lines ...
W0513 23:36:34.107] I0513 23:36:34.107337   50903 namespace_controller.go:171] Namespace has been deleted non-native-resources
I0513 23:36:34.208] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0513 23:36:34.431] generic-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0513 23:36:34.434] Successful
I0513 23:36:34.435] message:pod/busybox0 annotated
I0513 23:36:34.435] pod/busybox1 annotated
I0513 23:36:34.436] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0513 23:36:34.437] has:Object 'Kind' is missing
I0513 23:36:34.598] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0513 23:36:35.043] generic-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0513 23:36:35.046] Successful
I0513 23:36:35.047] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0513 23:36:35.047] pod/busybox0 configured
I0513 23:36:35.048] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0513 23:36:35.048] pod/busybox1 configured
I0513 23:36:35.049] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0513 23:36:35.050] has:error validating data: kind not set
I0513 23:36:35.208] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0513 23:36:35.465] deployment.apps/nginx created
W0513 23:36:35.566] I0513 23:36:35.473734   50903 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557790591-19728", Name:"nginx", UID:"ce982496-59df-4794-a9fd-56d3dbe04481", APIVersion:"apps/v1", ResourceVersion:"1051", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-958dc566b to 3
W0513 23:36:35.567] I0513 23:36:35.487231   50903 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557790591-19728", Name:"nginx-958dc566b", UID:"d4b34f1e-9a6d-43c6-884c-7db45afee45f", APIVersion:"apps/v1", ResourceVersion:"1052", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-958dc566b-4xp8n
W0513 23:36:35.568] I0513 23:36:35.494671   50903 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557790591-19728", Name:"nginx-958dc566b", UID:"d4b34f1e-9a6d-43c6-884c-7db45afee45f", APIVersion:"apps/v1", ResourceVersion:"1052", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-958dc566b-7cxzp
W0513 23:36:35.568] I0513 23:36:35.501943   50903 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557790591-19728", Name:"nginx-958dc566b", UID:"d4b34f1e-9a6d-43c6-884c-7db45afee45f", APIVersion:"apps/v1", ResourceVersion:"1052", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-958dc566b-fczz5
... skipping 45 lines ...
I0513 23:36:36.122] has:apps/v1
W0513 23:36:36.223] kubectl convert is DEPRECATED and will be removed in a future version.
W0513 23:36:36.223] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0513 23:36:36.324] deployment.extensions "nginx" deleted
I0513 23:36:36.420] Waiting for Get pods {{range.items}}{{.metadata.name}}:{{end}} : expected: busybox0:busybox1:, got: busybox0:busybox1:nginx-958dc566b-4xp8n:nginx-958dc566b-7cxzp:nginx-958dc566b-fczz5:
I0513 23:36:36.422] 
I0513 23:36:36.428] generic-resources.sh:280: FAIL!
I0513 23:36:36.429] Get pods {{range.items}}{{.metadata.name}}:{{end}}
I0513 23:36:36.430]   Expected: busybox0:busybox1:
I0513 23:36:36.430]   Got:      busybox0:busybox1:nginx-958dc566b-4xp8n:nginx-958dc566b-7cxzp:nginx-958dc566b-fczz5:
I0513 23:36:36.430]
I0513 23:36:36.431] 51 /go/src/k8s.io/kubernetes/hack/lib/test.sh
I0513 23:36:36.431]
... skipping 23 lines ...
W0513 23:36:36.539] I0513 23:36:36.487773   47575 clientconn.go:1016] blockingPicker: the picked transport is not ready, loop back to repick
W0513 23:36:36.539] I0513 23:36:36.487782   47575 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 23:36:36.540] W0513 23:36:36.488033   47575 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
... skipping 217 lines ...
W0513 23:36:36.580] E0513 23:36:36.489646   47575 controller.go:179] rpc error: code = Unavailable desc = transport is closing
W0513 23:36:36.591] W0513 23:36:36.492811   47575 clientconn.go:960] grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing
W0513 23:36:36.650] make: *** [test-cmd] Error 1
I0513 23:36:36.751] junit report dir: /workspace/artifacts
I0513 23:36:36.751] +++ [0513 23:36:36] Clean up complete
I0513 23:36:36.751] Makefile:329: recipe for target 'test-cmd' failed
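For local reproduction, the failed target is the repo's own make entry point; a hedged sketch, run from the kubernetes repo root:

  make test-cmd   # runs the same test-cmd suite that failed above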
W0513 23:37:16.676] Traceback (most recent call last):
W0513 23:37:16.676]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 178, in <module>
W0513 23:37:16.677]     ARGS.exclude_typecheck, ARGS.exclude_godep)
W0513 23:37:16.677]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 140, in main
W0513 23:37:16.677]     check(*cmd)
W0513 23:37:16.677]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W0513 23:37:16.677]     subprocess.check_call(cmd)
W0513 23:37:16.677]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0513 23:37:16.696]     raise CalledProcessError(retcode, cmd)
W0513 23:37:16.697] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=y', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'EXCLUDE_TYPECHECK=n', '-e', 'EXCLUDE_GODEP=n', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.14-v20190318-2ac98e338', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
E0513 23:37:16.697] Command failed
I0513 23:37:16.697] process 481 exited with code 1 after 15.1m
E0513 23:37:16.698] FAIL: ci-kubernetes-integration-master
I0513 23:37:16.698] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0513 23:37:17.741] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0513 23:37:17.819] process 67530 exited with code 0 after 0.0m
I0513 23:37:17.820] Call:  gcloud config get-value account
I0513 23:37:18.298] process 67542 exited with code 0 after 0.0m
I0513 23:37:18.298] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0513 23:37:18.299] Upload result and artifacts...
I0513 23:37:18.299] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-kubernetes-integration-master/1128077696966856708
I0513 23:37:18.299] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/1128077696966856708/artifacts
W0513 23:37:19.958] CommandException: One or more URLs matched no objects.
E0513 23:37:20.153] Command failed
I0513 23:37:20.153] process 67554 exited with code 1 after 0.0m
W0513 23:37:20.153] Remote dir gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/1128077696966856708/artifacts not exist yet
I0513 23:37:20.154] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/1128077696966856708/artifacts
I0513 23:37:23.743] process 67696 exited with code 0 after 0.1m
W0513 23:37:23.744] metadata path /workspace/_artifacts/metadata.json does not exist
W0513 23:37:23.744] metadata not found or invalid, init with empty metadata
... skipping 15 lines ...