Result: FAILURE
Tests: 0 failed / 89 succeeded
Started: 2019-05-13 10:08
Elapsed: 16m51s
Revision:
Builder: gke-prow-containerd-pool-99179761-521q
pod: e2bfde77-7566-11e9-bdf5-0a580a6c1546
resultstore: https://source.cloud.google.com/results/invocations/e9000551-3f49-4c68-889a-22a1d73b1a4b/targets/test
infra-commit: 6b6fd130f
repo: k8s.io/kubernetes
repo-commit: f5a1ceb1fcb3572d37865da8257f66659f8004a9
repos: {'k8s.io/kubernetes': 'master'}

No Test Failures!



Error lines from build-log.txt

... skipping 307 lines ...
W0513 10:20:00.036] I0513 10:20:00.036242   47072 serving.go:312] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
W0513 10:20:00.037] I0513 10:20:00.036348   47072 server.go:558] external host was not specified, using 172.17.0.2
W0513 10:20:00.037] W0513 10:20:00.036361   47072 authentication.go:415] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
W0513 10:20:00.037] I0513 10:20:00.036939   47072 server.go:145] Version: v1.15.0-alpha.3.261+f5a1ceb1fcb357
W0513 10:20:00.588] I0513 10:20:00.587546   47072 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0513 10:20:00.588] I0513 10:20:00.587572   47072 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0513 10:20:00.588] E0513 10:20:00.587999   47072 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 10:20:00.589] E0513 10:20:00.588038   47072 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 10:20:00.589] E0513 10:20:00.588058   47072 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 10:20:00.589] E0513 10:20:00.588071   47072 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 10:20:00.589] E0513 10:20:00.588095   47072 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 10:20:00.589] E0513 10:20:00.588114   47072 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 10:20:00.590] E0513 10:20:00.588129   47072 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 10:20:00.590] E0513 10:20:00.588143   47072 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 10:20:00.590] E0513 10:20:00.588277   47072 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 10:20:00.590] E0513 10:20:00.588340   47072 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 10:20:00.591] E0513 10:20:00.588374   47072 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 10:20:00.591] E0513 10:20:00.588392   47072 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 10:20:00.591] I0513 10:20:00.588422   47072 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0513 10:20:00.591] I0513 10:20:00.588436   47072 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0513 10:20:00.592] I0513 10:20:00.590260   47072 client.go:354] parsed scheme: ""
W0513 10:20:00.592] I0513 10:20:00.590284   47072 client.go:354] scheme "" not registered, fallback to default scheme
W0513 10:20:00.592] I0513 10:20:00.590353   47072 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0513 10:20:00.592] I0513 10:20:00.590591   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
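The repeated "duplicate metrics collector registration attempted" errors above are the Prometheus Go client rejecting a second Register call for a collector whose fully-qualified name is already registered. A minimal sketch of how that error arises, assuming github.com/prometheus/client_golang; the metric name here is a hypothetical stand-in, not the one the apiserver registers:

package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

func main() {
	// Two distinct collectors that share the same fully-qualified name and help text.
	a := prometheus.NewCounter(prometheus.CounterOpts{Name: "example_adds", Help: "hypothetical counter"})
	b := prometheus.NewCounter(prometheus.CounterOpts{Name: "example_adds", Help: "hypothetical counter"})

	fmt.Println(prometheus.Register(a)) // <nil> — first registration succeeds
	fmt.Println(prometheus.Register(b)) // duplicate metrics collector registration attempted
}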
... skipping 361 lines ...
W0513 10:20:01.348] W0513 10:20:01.347744   47072 genericapiserver.go:347] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0513 10:20:01.589] I0513 10:20:01.588897   47072 client.go:354] parsed scheme: ""
W0513 10:20:01.589] I0513 10:20:01.588936   47072 client.go:354] scheme "" not registered, fallback to default scheme
W0513 10:20:01.589] I0513 10:20:01.588990   47072 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0513 10:20:01.590] I0513 10:20:01.589082   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:20:01.590] I0513 10:20:01.589644   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:20:02.557] E0513 10:20:02.557143   47072 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 10:20:02.558] E0513 10:20:02.557194   47072 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 10:20:02.558] E0513 10:20:02.557230   47072 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 10:20:02.558] E0513 10:20:02.557248   47072 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 10:20:02.558] E0513 10:20:02.557275   47072 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 10:20:02.558] E0513 10:20:02.557303   47072 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 10:20:02.559] E0513 10:20:02.557326   47072 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 10:20:02.559] E0513 10:20:02.557346   47072 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 10:20:02.559] E0513 10:20:02.557392   47072 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 10:20:02.559] E0513 10:20:02.557414   47072 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 10:20:02.560] E0513 10:20:02.557462   47072 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 10:20:02.560] E0513 10:20:02.557508   47072 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0513 10:20:02.560] I0513 10:20:02.557553   47072 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0513 10:20:02.560] I0513 10:20:02.557560   47072 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0513 10:20:02.560] I0513 10:20:02.559230   47072 client.go:354] parsed scheme: ""
W0513 10:20:02.561] I0513 10:20:02.559257   47072 client.go:354] scheme "" not registered, fallback to default scheme
W0513 10:20:02.561] I0513 10:20:02.559316   47072 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0513 10:20:02.561] I0513 10:20:02.559389   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 60 lines ...
W0513 10:20:58.672] I0513 10:20:58.645358   50406 deployment_controller.go:152] Starting deployment controller
W0513 10:20:58.672] I0513 10:20:58.646457   50406 controller_utils.go:1029] Waiting for caches to sync for deployment controller
W0513 10:20:58.673] I0513 10:20:58.645408   50406 pv_protection_controller.go:82] Starting PV protection controller
W0513 10:20:58.673] I0513 10:20:58.646509   50406 controller_utils.go:1029] Waiting for caches to sync for PV protection controller
W0513 10:20:58.673] I0513 10:20:58.646768   50406 controllermanager.go:523] Started "csrapproving"
W0513 10:20:58.673] I0513 10:20:58.646999   50406 node_lifecycle_controller.go:77] Sending events to api server
W0513 10:20:58.673] E0513 10:20:58.647040   50406 core.go:160] failed to start cloud node lifecycle controller: no cloud provider provided
W0513 10:20:58.673] W0513 10:20:58.647049   50406 controllermanager.go:515] Skipping "cloud-node-lifecycle"
W0513 10:20:58.673] W0513 10:20:58.647058   50406 controllermanager.go:515] Skipping "root-ca-cert-publisher"
W0513 10:20:58.673] I0513 10:20:58.647325   50406 certificate_controller.go:113] Starting certificate controller
W0513 10:20:58.674] I0513 10:20:58.647345   50406 job_controller.go:143] Starting job controller
W0513 10:20:58.674] I0513 10:20:58.647357   50406 controller_utils.go:1029] Waiting for caches to sync for certificate controller
W0513 10:20:58.674] I0513 10:20:58.647374   50406 controller_utils.go:1029] Waiting for caches to sync for job controller
... skipping 51 lines ...
W0513 10:20:58.871] I0513 10:20:58.870305   50406 stateful_set.go:145] Starting stateful set controller
W0513 10:20:58.871] I0513 10:20:58.870404   50406 controller_utils.go:1029] Waiting for caches to sync for stateful set controller
W0513 10:20:58.871] I0513 10:20:58.870327   50406 cronjob_controller.go:96] Starting CronJob Manager
W0513 10:20:58.871] I0513 10:20:58.870567   50406 node_lifecycle_controller.go:388] Controller will reconcile labels.
W0513 10:20:58.871] I0513 10:20:58.870600   50406 node_lifecycle_controller.go:401] Controller will taint node by condition.
W0513 10:20:58.871] I0513 10:20:58.870630   50406 controllermanager.go:523] Started "nodelifecycle"
W0513 10:20:58.872] E0513 10:20:58.871344   50406 core.go:76] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0513 10:20:58.872] W0513 10:20:58.871370   50406 controllermanager.go:515] Skipping "service"
W0513 10:20:58.872] I0513 10:20:58.871375   50406 node_lifecycle_controller.go:425] Starting node controller
W0513 10:20:58.872] I0513 10:20:58.871398   50406 controller_utils.go:1029] Waiting for caches to sync for taint controller
W0513 10:20:58.872] I0513 10:20:58.872063   50406 controllermanager.go:523] Started "clusterrole-aggregation"
W0513 10:20:58.873] I0513 10:20:58.872879   50406 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
W0513 10:20:58.873] I0513 10:20:58.872910   50406 controller_utils.go:1029] Waiting for caches to sync for ClusterRoleAggregator controller
... skipping 37 lines ...
W0513 10:20:59.323] I0513 10:20:59.323278   50406 controller_utils.go:1029] Waiting for caches to sync for endpoint controller
W0513 10:20:59.324] I0513 10:20:59.324273   50406 controllermanager.go:523] Started "disruption"
W0513 10:20:59.324] I0513 10:20:59.324291   50406 disruption.go:286] Starting disruption controller
W0513 10:20:59.324] I0513 10:20:59.324319   50406 controller_utils.go:1029] Waiting for caches to sync for disruption controller
W0513 10:20:59.392] I0513 10:20:59.391641   50406 controller_utils.go:1036] Caches are synced for GC controller
W0513 10:20:59.398] I0513 10:20:59.398153   50406 controller_utils.go:1036] Caches are synced for PVC protection controller
W0513 10:20:59.406] W0513 10:20:59.405740   50406 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0513 10:20:59.422] I0513 10:20:59.422352   50406 controller_utils.go:1036] Caches are synced for TTL controller
W0513 10:20:59.424] I0513 10:20:59.424300   50406 controller_utils.go:1036] Caches are synced for endpoint controller
W0513 10:20:59.448] I0513 10:20:59.448342   50406 controller_utils.go:1036] Caches are synced for job controller
W0513 10:20:59.449] I0513 10:20:59.448348   50406 controller_utils.go:1036] Caches are synced for PV protection controller
W0513 10:20:59.469] I0513 10:20:59.469300   50406 controller_utils.go:1036] Caches are synced for expand controller
W0513 10:20:59.470] I0513 10:20:59.469300   50406 controller_utils.go:1036] Caches are synced for persistent volume controller
W0513 10:20:59.472] I0513 10:20:59.472504   50406 controller_utils.go:1036] Caches are synced for taint controller
W0513 10:20:59.473] I0513 10:20:59.472622   50406 node_lifecycle_controller.go:1159] Initializing eviction metric for zone: 
W0513 10:20:59.473] I0513 10:20:59.472745   50406 node_lifecycle_controller.go:1009] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
W0513 10:20:59.473] I0513 10:20:59.473129   50406 taint_manager.go:198] Starting NoExecuteTaintManager
W0513 10:20:59.474] I0513 10:20:59.473280   50406 controller_utils.go:1036] Caches are synced for ClusterRoleAggregator controller
W0513 10:20:59.474] I0513 10:20:59.473289   50406 event.go:258] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"68b749e4-a50e-4552-ad94-02839b1e85ff", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller
W0513 10:20:59.481] I0513 10:20:59.481251   50406 controller_utils.go:1036] Caches are synced for HPA controller
W0513 10:20:59.488] E0513 10:20:59.488229   50406 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0513 10:20:59.489] E0513 10:20:59.488705   50406 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
W0513 10:20:59.496] E0513 10:20:59.495343   50406 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0513 10:20:59.507] E0513 10:20:59.505889   50406 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0513 10:20:59.514] E0513 10:20:59.513429   50406 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0513 10:20:59.569] I0513 10:20:59.569258   50406 controller_utils.go:1036] Caches are synced for attach detach controller
W0513 10:20:59.840] I0513 10:20:59.749242   50406 controller_utils.go:1036] Caches are synced for ReplicationController controller
W0513 10:20:59.852] The Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.0.0.1": provided IP is already allocated
W0513 10:20:59.870] I0513 10:20:59.870228   50406 controller_utils.go:1036] Caches are synced for daemon sets controller
W0513 10:20:59.872] I0513 10:20:59.871256   50406 controller_utils.go:1036] Caches are synced for stateful set controller
W0513 10:20:59.920] I0513 10:20:59.920243   50406 controller_utils.go:1036] Caches are synced for ReplicaSet controller
... skipping 91 lines ...
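The "Operation cannot be fulfilled ... the object has been modified; please apply your changes to the latest version and try again" messages from the ClusterRoleAggregator above (and from the kubectl patch/apply conflict tests later) are ordinary optimistic-concurrency conflicts on resourceVersion. A minimal sketch of how a client-go caller typically absorbs such conflicts, assuming k8s.io/client-go and k8s.io/apimachinery; the simulated update closure is hypothetical and only fabricates a Conflict error:

package main

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/util/retry"
)

func main() {
	attempts := 0
	// RetryOnConflict re-runs the closure whenever it returns a Conflict error;
	// a real caller would re-GET the object and retry its Update inside the closure.
	err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
		attempts++
		if attempts < 3 {
			return apierrors.NewConflict(
				schema.GroupResource{Group: "rbac.authorization.k8s.io", Resource: "clusterroles"},
				"edit",
				fmt.Errorf("the object has been modified; please apply your changes to the latest version and try again"),
			)
		}
		return nil
	})
	fmt.Println("attempts:", attempts, "err:", err)
}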
I0513 10:21:04.785] +++ working dir: /go/src/k8s.io/kubernetes
I0513 10:21:04.787] +++ command: run_RESTMapper_evaluation_tests
I0513 10:21:04.806] +++ [0513 10:21:04] Creating namespace namespace-1557742864-2332
I0513 10:21:04.906] namespace/namespace-1557742864-2332 created
I0513 10:21:05.009] Context "test" modified.
I0513 10:21:05.020] +++ [0513 10:21:05] Testing RESTMapper
I0513 10:21:05.181] +++ [0513 10:21:05] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0513 10:21:05.201] +++ exit code: 0
I0513 10:21:05.797] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0513 10:21:05.798] bindings                                                                      true         Binding
I0513 10:21:05.798] componentstatuses                 cs                                          false        ComponentStatus
I0513 10:21:05.798] configmaps                        cm                                          true         ConfigMap
I0513 10:21:05.798] endpoints                         ep                                          true         Endpoints
... skipping 640 lines ...
I0513 10:21:31.314] core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0513 10:21:31.575] (Bcore.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0513 10:21:31.733] (Bcore.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0513 10:21:32.005] (Bcore.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0513 10:21:32.156] (Bcore.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0513 10:21:32.277] (Bpod "valid-pod" force deleted
W0513 10:21:32.378] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0513 10:21:32.379] error: setting 'all' parameter but found a non empty selector. 
W0513 10:21:32.379] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0513 10:21:32.481] core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{$id_field}}:{{end}}: 
I0513 10:21:32.659] (Bcore.sh:211: Successful get namespaces {{range.items}}{{ if eq $id_field \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
I0513 10:21:32.783] (Bnamespace/test-kubectl-describe-pod created
I0513 10:21:32.943] core.sh:215: Successful get namespaces/test-kubectl-describe-pod {{.metadata.name}}: test-kubectl-describe-pod
I0513 10:21:33.100] (Bcore.sh:219: Successful get secrets --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 11 lines ...
I0513 10:21:34.747] (Bpoddisruptionbudget.policy/test-pdb-3 created
I0513 10:21:34.933] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0513 10:21:35.068] (Bpoddisruptionbudget.policy/test-pdb-4 created
I0513 10:21:35.249] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0513 10:21:35.554] (Bcore.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0513 10:21:35.956] (Bpod/env-test-pod created
W0513 10:21:36.057] error: min-available and max-unavailable cannot be both specified
I0513 10:21:36.345] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0513 10:21:36.346] Name:         env-test-pod
I0513 10:21:36.346] Namespace:    test-kubectl-describe-pod
I0513 10:21:36.346] Priority:     0
I0513 10:21:36.346] Node:         <none>
I0513 10:21:36.346] Labels:       <none>
... skipping 143 lines ...
I0513 10:21:53.966] (Bservice "modified" deleted
I0513 10:21:54.106] replicationcontroller "modified" deleted
I0513 10:21:54.609] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0513 10:21:54.870] (Bpod/valid-pod created
I0513 10:21:55.037] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0513 10:21:55.270] (BSuccessful
I0513 10:21:55.271] message:Error from server: cannot restore map from string
I0513 10:21:55.271] has:cannot restore map from string
W0513 10:21:55.371] E0513 10:21:55.255971   47072 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
I0513 10:21:55.472] Successful
I0513 10:21:55.473] message:pod/valid-pod patched (no change)
I0513 10:21:55.473] has:patched (no change)
I0513 10:21:55.527] pod/valid-pod patched
I0513 10:21:55.667] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0513 10:21:55.815] (Bcore.sh:457: Successful get pods {{range.items}}{{.metadata.annotations}}:{{end}}: map[kubernetes.io/change-cause:kubectl patch pod valid-pod --server=http://127.0.0.1:8080 --match-server-version=true --record=true --patch={"spec":{"containers":[{"name": "kubernetes-serve-hostname", "image": "nginx"}]}}]:
... skipping 4 lines ...
I0513 10:21:56.544] (Bpod/valid-pod patched
I0513 10:21:56.708] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0513 10:21:56.845] (Bpod/valid-pod patched
I0513 10:21:56.990] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0513 10:21:57.260] (Bpod/valid-pod patched
I0513 10:21:57.447] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0513 10:21:57.725] (B+++ [0513 10:21:57] "kubectl patch with resourceVersion 511" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0513 10:21:58.208] pod "valid-pod" deleted
I0513 10:21:58.225] pod/valid-pod replaced
I0513 10:21:58.399] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0513 10:21:58.760] (BSuccessful
I0513 10:21:58.760] message:error: --grace-period must have --force specified
I0513 10:21:58.760] has:\-\-grace-period must have \-\-force specified
I0513 10:21:59.049] Successful
I0513 10:21:59.049] message:error: --timeout must have --force specified
I0513 10:21:59.049] has:\-\-timeout must have \-\-force specified
I0513 10:21:59.354] node/node-v1-test created
W0513 10:21:59.455] W0513 10:21:59.354398   50406 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
W0513 10:21:59.485] I0513 10:21:59.485068   50406 event.go:258] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-v1-test", UID:"771c91f9-df61-4383-99fa-b086dad2bdad", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-v1-test event: Registered Node node-v1-test in Controller
I0513 10:21:59.670] node/node-v1-test replaced
I0513 10:21:59.855] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0513 10:21:59.967] (Bnode "node-v1-test" deleted
I0513 10:22:00.129] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0513 10:22:00.600] (Bcore.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
... skipping 17 lines ...
I0513 10:22:02.929]     name: kubernetes-pause
I0513 10:22:02.929] has:localonlyvalue
I0513 10:22:02.999] core.sh:585: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0513 10:22:03.248] (Bcore.sh:589: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0513 10:22:03.377] (Bcore.sh:593: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0513 10:22:03.491] (Bpod/valid-pod labeled
W0513 10:22:03.594] error: 'name' already has a value (valid-pod), and --overwrite is false
I0513 10:22:03.695] core.sh:597: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
I0513 10:22:03.764] (Bcore.sh:601: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0513 10:22:03.887] (Bpod "valid-pod" force deleted
W0513 10:22:03.988] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0513 10:22:04.089] core.sh:605: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0513 10:22:04.089] (B+++ [0513 10:22:04] Creating namespace namespace-1557742924-11005
... skipping 83 lines ...
I0513 10:22:18.439] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0513 10:22:18.444] +++ working dir: /go/src/k8s.io/kubernetes
I0513 10:22:18.449] +++ command: run_kubectl_create_error_tests
I0513 10:22:18.470] +++ [0513 10:22:18] Creating namespace namespace-1557742938-2456
I0513 10:22:18.590] namespace/namespace-1557742938-2456 created
I0513 10:22:18.710] Context "test" modified.
I0513 10:22:18.727] +++ [0513 10:22:18] Testing kubectl create with error
W0513 10:22:18.829] Error: must specify one of -f and -k
W0513 10:22:18.830] 
W0513 10:22:18.830] Create a resource from a file or from stdin.
W0513 10:22:18.830] 
W0513 10:22:18.830]  JSON and YAML formats are accepted.
W0513 10:22:18.831] 
W0513 10:22:18.831] Examples:
... skipping 41 lines ...
W0513 10:22:18.843] 
W0513 10:22:18.843] Usage:
W0513 10:22:18.843]   kubectl create -f FILENAME [options]
W0513 10:22:18.844] 
W0513 10:22:18.844] Use "kubectl <command> --help" for more information about a given command.
W0513 10:22:18.844] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0513 10:22:19.184] +++ [0513 10:22:19] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0513 10:22:19.309] kubectl convert is DEPRECATED and will be removed in a future version.
W0513 10:22:19.309] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0513 10:22:19.482] +++ exit code: 0
I0513 10:22:19.562] Recording: run_kubectl_apply_tests
I0513 10:22:19.563] Running command: run_kubectl_apply_tests
I0513 10:22:19.609] 
... skipping 20 lines ...
W0513 10:22:24.855] I0513 10:22:24.853716   47072 client.go:354] scheme "" not registered, fallback to default scheme
W0513 10:22:24.855] I0513 10:22:24.853776   47072 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0513 10:22:24.856] I0513 10:22:24.853835   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:22:24.856] I0513 10:22:24.854575   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:22:24.857] I0513 10:22:24.856803   47072 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
I0513 10:22:24.958] kind.mygroup.example.com/myobj serverside-applied (server dry run)
W0513 10:22:25.087] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0513 10:22:25.271] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0513 10:22:25.339] +++ exit code: 0
I0513 10:22:25.426] Recording: run_kubectl_run_tests
I0513 10:22:25.426] Running command: run_kubectl_run_tests
I0513 10:22:25.481] 
I0513 10:22:25.485] +++ Running case: test-cmd.run_kubectl_run_tests 
... skipping 96 lines ...
I0513 10:22:31.580] +++ [0513 10:22:31] Testing kubectl create filter
I0513 10:22:31.600] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0513 10:22:32.051] (Bpod/selector-test-pod created
W0513 10:22:32.224] I0513 10:22:32.224158   50406 horizontal.go:320] Horizontal Pod Autoscaler frontend has been deleted in namespace-1557742932-9747
I0513 10:22:32.326] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0513 10:22:32.384] (BSuccessful
I0513 10:22:32.384] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0513 10:22:32.385] has:pods "selector-test-pod-dont-apply" not found
I0513 10:22:32.516] pod "selector-test-pod" deleted
I0513 10:22:32.550] +++ exit code: 0
I0513 10:22:32.630] Recording: run_kubectl_apply_deployments_tests
I0513 10:22:32.631] Running command: run_kubectl_apply_deployments_tests
I0513 10:22:32.666] 
... skipping 38 lines ...
W0513 10:22:37.185] I0513 10:22:37.087143   50406 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557742952-30057", Name:"nginx", UID:"d306fa21-2cc3-4b65-bc5c-b4894518208c", APIVersion:"apps/v1", ResourceVersion:"621", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-8c9ccf86d to 3
W0513 10:22:37.396] I0513 10:22:37.097393   50406 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557742952-30057", Name:"nginx-8c9ccf86d", UID:"468bb033-2793-4022-8cc5-f1eb80093d65", APIVersion:"apps/v1", ResourceVersion:"622", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8c9ccf86d-7h44q
W0513 10:22:37.397] I0513 10:22:37.111384   50406 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557742952-30057", Name:"nginx-8c9ccf86d", UID:"468bb033-2793-4022-8cc5-f1eb80093d65", APIVersion:"apps/v1", ResourceVersion:"622", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8c9ccf86d-xvfkk
W0513 10:22:37.398] I0513 10:22:37.114890   50406 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557742952-30057", Name:"nginx-8c9ccf86d", UID:"468bb033-2793-4022-8cc5-f1eb80093d65", APIVersion:"apps/v1", ResourceVersion:"622", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8c9ccf86d-l2kjt
I0513 10:22:37.515] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I0513 10:22:41.948] (BSuccessful
I0513 10:22:41.948] message:Error from server (Conflict): error when applying patch:
I0513 10:22:41.949] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1557742952-30057\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0513 10:22:41.949] to:
I0513 10:22:41.950] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I0513 10:22:41.950] Name: "nginx", Namespace: "namespace-1557742952-30057"
I0513 10:22:41.953] Object: &{map["apiVersion":"extensions/v1beta1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1557742952-30057\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-05-13T10:22:37Z" "generation":'\x01' "labels":map["name":"nginx"] "managedFields":[map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map["f:deployment.kubernetes.io/revision":map[]]] "f:status":map["f:conditions":map[".":map[] "k:{\"type\":\"Available\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]]] "f:observedGeneration":map[] "f:replicas":map[] "f:unavailableReplicas":map[] "f:updatedReplicas":map[]]] "manager":"kube-controller-manager" "operation":"Update" "time":"2019-05-13T10:22:37Z"] map["apiVersion":"extensions/v1beta1" "fields":map["f:metadata":map["f:annotations":map[".":map[] "f:kubectl.kubernetes.io/last-applied-configuration":map[]] "f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:progressDeadlineSeconds":map[] "f:replicas":map[] "f:revisionHistoryLimit":map[] "f:selector":map[".":map[] "f:matchLabels":map[".":map[] "f:name":map[]]] "f:strategy":map["f:rollingUpdate":map[".":map[] "f:maxSurge":map[] "f:maxUnavailable":map[]] "f:type":map[]] "f:template":map["f:metadata":map["f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:containers":map["k:{\"name\":\"nginx\"}":map[".":map[] "f:image":map[] "f:imagePullPolicy":map[] "f:name":map[] "f:ports":map[".":map[] "k:{\"containerPort\":80,\"protocol\":\"TCP\"}":map[".":map[] "f:containerPort":map[] "f:protocol":map[]]] "f:resources":map[] "f:terminationMessagePath":map[] "f:terminationMessagePolicy":map[]]] "f:dnsPolicy":map[] "f:restartPolicy":map[] "f:schedulerName":map[] "f:securityContext":map[] "f:terminationGracePeriodSeconds":map[]]]]] "manager":"kubectl" "operation":"Update" "time":"2019-05-13T10:22:37Z"]] "name":"nginx" "namespace":"namespace-1557742952-30057" "resourceVersion":"634" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1557742952-30057/deployments/nginx" "uid":"d306fa21-2cc3-4b65-bc5c-b4894518208c"] "spec":map["progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\x03' "revisionHistoryLimit":%!q(int64=+2147483647) "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":'\x01' "maxUnavailable":'\x01'] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] "status":map["conditions":[map["lastTransitionTime":"2019-05-13T10:22:37Z" "lastUpdateTime":"2019-05-13T10:22:37Z" "message":"Deployment does not have minimum 
availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0513 10:22:41.953] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I0513 10:22:41.953] has:Error from server (Conflict)
I0513 10:22:47.431] deployment.extensions/nginx configured
W0513 10:22:47.533] I0513 10:22:47.454710   50406 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557742952-30057", Name:"nginx", UID:"3af85343-251c-484c-b858-d945181d70b5", APIVersion:"apps/v1", ResourceVersion:"658", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-86bb9b4d9f to 3
W0513 10:22:47.533] I0513 10:22:47.462912   50406 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557742952-30057", Name:"nginx-86bb9b4d9f", UID:"06f6c490-3b69-4926-b5fd-5e71668129d5", APIVersion:"apps/v1", ResourceVersion:"659", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-86bb9b4d9f-9zznd
W0513 10:22:47.534] I0513 10:22:47.469589   50406 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557742952-30057", Name:"nginx-86bb9b4d9f", UID:"06f6c490-3b69-4926-b5fd-5e71668129d5", APIVersion:"apps/v1", ResourceVersion:"659", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-86bb9b4d9f-cmw9r
W0513 10:22:47.534] I0513 10:22:47.472286   50406 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557742952-30057", Name:"nginx-86bb9b4d9f", UID:"06f6c490-3b69-4926-b5fd-5e71668129d5", APIVersion:"apps/v1", ResourceVersion:"659", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-86bb9b4d9f-fvxd8
I0513 10:22:47.684] Successful
... skipping 192 lines ...
I0513 10:22:57.210] +++ [0513 10:22:57] Creating namespace namespace-1557742977-5716
I0513 10:22:57.363] namespace/namespace-1557742977-5716 created
I0513 10:22:57.504] Context "test" modified.
I0513 10:22:57.517] +++ [0513 10:22:57] Testing kubectl get
I0513 10:22:57.688] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0513 10:22:57.858] (BSuccessful
I0513 10:22:57.858] message:Error from server (NotFound): pods "abc" not found
I0513 10:22:57.858] has:pods "abc" not found
I0513 10:22:58.031] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0513 10:22:58.196] (BSuccessful
I0513 10:22:58.197] message:Error from server (NotFound): pods "abc" not found
I0513 10:22:58.197] has:pods "abc" not found
I0513 10:22:58.377] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0513 10:22:58.555] (BSuccessful
I0513 10:22:58.555] message:{
I0513 10:22:58.556]     "apiVersion": "v1",
I0513 10:22:58.556]     "items": [],
... skipping 23 lines ...
I0513 10:22:59.248] has not:No resources found
I0513 10:22:59.391] Successful
I0513 10:22:59.392] message:NAME
I0513 10:22:59.392] has not:No resources found
I0513 10:22:59.566] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0513 10:22:59.756] (BSuccessful
I0513 10:22:59.757] message:error: the server doesn't have a resource type "foobar"
I0513 10:22:59.757] has not:No resources found
I0513 10:22:59.906] Successful
I0513 10:22:59.906] message:No resources found.
I0513 10:22:59.907] has:No resources found
I0513 10:23:00.063] Successful
I0513 10:23:00.064] message:
I0513 10:23:00.065] has not:No resources found
I0513 10:23:00.238] Successful
I0513 10:23:00.238] message:No resources found.
I0513 10:23:00.238] has:No resources found
I0513 10:23:00.417] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0513 10:23:00.566] (BSuccessful
I0513 10:23:00.566] message:Error from server (NotFound): pods "abc" not found
I0513 10:23:00.566] has:pods "abc" not found
I0513 10:23:00.571] FAIL!
I0513 10:23:00.572] message:Error from server (NotFound): pods "abc" not found
I0513 10:23:00.572] has not:List
I0513 10:23:00.572] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0513 10:23:00.787] Successful
I0513 10:23:00.788] message:I0513 10:23:00.683846   60882 loader.go:359] Config loaded from file:  /tmp/tmp.BzpPZbCGZw/.kube/config
I0513 10:23:00.788] I0513 10:23:00.685977   60882 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0513 10:23:00.788] I0513 10:23:00.733943   60882 round_trippers.go:438] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 888 lines ...
I0513 10:23:07.266] Successful
I0513 10:23:07.267] message:NAME    DATA   AGE
I0513 10:23:07.267] one     0      1s
I0513 10:23:07.267] three   0      0s
I0513 10:23:07.267] two     0      1s
I0513 10:23:07.267] STATUS    REASON          MESSAGE
I0513 10:23:07.268] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0513 10:23:07.268] has not:watch is only supported on individual resources
I0513 10:23:08.431] Successful
I0513 10:23:08.432] message:STATUS    REASON          MESSAGE
I0513 10:23:08.432] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0513 10:23:08.432] has not:watch is only supported on individual resources
I0513 10:23:08.447] +++ [0513 10:23:08] Creating namespace namespace-1557742988-29205
I0513 10:23:08.566] namespace/namespace-1557742988-29205 created
I0513 10:23:08.694] Context "test" modified.
I0513 10:23:08.866] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0513 10:23:09.208] (Bpod/valid-pod created
... skipping 104 lines ...
I0513 10:23:09.381] }
I0513 10:23:09.538] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0513 10:23:09.937] (B<no value>Successful
I0513 10:23:09.937] message:valid-pod:
I0513 10:23:09.938] has:valid-pod:
I0513 10:23:10.095] Successful
I0513 10:23:10.096] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0513 10:23:10.096] 	template was:
I0513 10:23:10.097] 		{.missing}
I0513 10:23:10.097] 	object given to jsonpath engine was:
I0513 10:23:10.099] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-05-13T10:23:09Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-05-13T10:23:09Z"}}, "name":"valid-pod", "namespace":"namespace-1557742988-29205", "resourceVersion":"737", "selfLink":"/api/v1/namespaces/namespace-1557742988-29205/pods/valid-pod", "uid":"73a3cebd-d27f-4374-922d-611efc9b618f"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0513 10:23:10.100] has:missing is not found
W0513 10:23:10.229] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
I0513 10:23:10.330] Successful
I0513 10:23:10.331] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0513 10:23:10.331] 	template was:
I0513 10:23:10.331] 		{{.missing}}
I0513 10:23:10.332] 	raw data was:
I0513 10:23:10.333] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-05-13T10:23:09Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-05-13T10:23:09Z"}],"name":"valid-pod","namespace":"namespace-1557742988-29205","resourceVersion":"737","selfLink":"/api/v1/namespaces/namespace-1557742988-29205/pods/valid-pod","uid":"73a3cebd-d27f-4374-922d-611efc9b618f"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0513 10:23:10.333] 	object given to template engine was:
I0513 10:23:10.336] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-05-13T10:23:09Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-05-13T10:23:09Z]] name:valid-pod namespace:namespace-1557742988-29205 resourceVersion:737 selfLink:/api/v1/namespaces/namespace-1557742988-29205/pods/valid-pod uid:73a3cebd-d27f-4374-922d-611efc9b618f] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0513 10:23:10.336] has:map has no entry for key "missing"
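The `map has no entry for key "missing"` failure checked above matches the standard library's text/template behaviour when a template is executed with missingkey=error and references a field the object lacks. A minimal sketch reproducing the message with text/template; the template and data are hypothetical stand-ins for the pod object kubectl renders:

package main

import (
	"fmt"
	"os"
	"text/template"
)

func main() {
	// With missingkey=error, a reference to an absent map key fails the
	// execution instead of printing "<no value>".
	tmpl := template.Must(template.New("output").Option("missingkey=error").Parse("{{.missing}}"))
	data := map[string]interface{}{"name": "valid-pod"}
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		fmt.Println(err) // ... executing "output" at <.missing>: map has no entry for key "missing"
	}
}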
I0513 10:23:11.388] Successful
I0513 10:23:11.389] message:NAME        READY   STATUS    RESTARTS   AGE
I0513 10:23:11.389] valid-pod   0/1     Pending   0          1s
I0513 10:23:11.390] STATUS      REASON          MESSAGE
I0513 10:23:11.390] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0513 10:23:11.390] has:STATUS
I0513 10:23:11.393] Successful
I0513 10:23:11.393] message:NAME        READY   STATUS    RESTARTS   AGE
I0513 10:23:11.394] valid-pod   0/1     Pending   0          1s
I0513 10:23:11.394] STATUS      REASON          MESSAGE
I0513 10:23:11.394] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0513 10:23:11.395] has:valid-pod
I0513 10:23:12.539] Successful
I0513 10:23:12.540] message:pod/valid-pod
I0513 10:23:12.540] has not:STATUS
I0513 10:23:12.541] Successful
I0513 10:23:12.542] message:pod/valid-pod
... skipping 142 lines ...
I0513 10:23:13.718]   terminationGracePeriodSeconds: 30
I0513 10:23:13.719] status:
I0513 10:23:13.719]   phase: Pending
I0513 10:23:13.719]   qosClass: Guaranteed
I0513 10:23:13.719] has:name: valid-pod
I0513 10:23:13.857] Successful
I0513 10:23:13.857] message:Error from server (NotFound): pods "invalid-pod" not found
I0513 10:23:13.857] has:"invalid-pod" not found
I0513 10:23:13.997] pod "valid-pod" deleted
I0513 10:23:14.172] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0513 10:23:14.559] (Bpod/redis-master created
I0513 10:23:14.566] pod/valid-pod created
I0513 10:23:14.751] Successful
... skipping 283 lines ...
I0513 10:23:24.941] +++ command: run_kubectl_exec_pod_tests
I0513 10:23:24.958] +++ [0513 10:23:24] Creating namespace namespace-1557743004-15820
I0513 10:23:25.075] namespace/namespace-1557743004-15820 created
I0513 10:23:25.191] Context "test" modified.
I0513 10:23:25.209] +++ [0513 10:23:25] Testing kubectl exec POD COMMAND
I0513 10:23:25.363] Successful
I0513 10:23:25.364] message:Error from server (NotFound): pods "abc" not found
I0513 10:23:25.365] has:pods "abc" not found
I0513 10:23:25.723] pod/test-pod created
I0513 10:23:25.946] Successful
I0513 10:23:25.947] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0513 10:23:25.947] has not:pods "test-pod" not found
I0513 10:23:25.958] Successful
I0513 10:23:25.959] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0513 10:23:25.959] has not:pod or type/name must be specified
I0513 10:23:26.093] pod "test-pod" deleted
I0513 10:23:26.128] +++ exit code: 0
I0513 10:23:26.202] Recording: run_kubectl_exec_resource_name_tests
I0513 10:23:26.202] Running command: run_kubectl_exec_resource_name_tests
I0513 10:23:26.242] 
... skipping 2 lines ...
I0513 10:23:26.258] +++ command: run_kubectl_exec_resource_name_tests
I0513 10:23:26.274] +++ [0513 10:23:26] Creating namespace namespace-1557743006-13011
I0513 10:23:26.389] namespace/namespace-1557743006-13011 created
I0513 10:23:26.518] Context "test" modified.
I0513 10:23:26.541] +++ [0513 10:23:26] Testing kubectl exec TYPE/NAME COMMAND
I0513 10:23:26.709] Successful
I0513 10:23:26.710] message:error: the server doesn't have a resource type "foo"
I0513 10:23:26.710] has:error:
I0513 10:23:26.862] Successful
I0513 10:23:26.863] message:Error from server (NotFound): deployments.extensions "bar" not found
I0513 10:23:26.863] has:"bar" not found
I0513 10:23:27.165] pod/test-pod created
I0513 10:23:27.563] replicaset.apps/frontend created
W0513 10:23:27.667] I0513 10:23:27.570285   50406 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557743006-13011", Name:"frontend", UID:"07368571-efbb-4818-86e1-1a56e51a2581", APIVersion:"apps/v1", ResourceVersion:"858", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-8f44m
W0513 10:23:27.668] I0513 10:23:27.584509   50406 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557743006-13011", Name:"frontend", UID:"07368571-efbb-4818-86e1-1a56e51a2581", APIVersion:"apps/v1", ResourceVersion:"858", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-v6w8n
W0513 10:23:27.668] I0513 10:23:27.585073   50406 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557743006-13011", Name:"frontend", UID:"07368571-efbb-4818-86e1-1a56e51a2581", APIVersion:"apps/v1", ResourceVersion:"858", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-p8stk
I0513 10:23:27.957] configmap/test-set-env-config created
I0513 10:23:28.147] Successful
I0513 10:23:28.148] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I0513 10:23:28.148] has:not implemented
I0513 10:23:28.308] Successful
I0513 10:23:28.310] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0513 10:23:28.311] has not:not found
I0513 10:23:28.311] Successful
I0513 10:23:28.312] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0513 10:23:28.312] has not:pod or type/name must be specified
I0513 10:23:28.481] Successful
I0513 10:23:28.482] message:Error from server (BadRequest): pod frontend-8f44m does not have a host assigned
I0513 10:23:28.482] has not:not found
I0513 10:23:28.485] Successful
I0513 10:23:28.485] message:Error from server (BadRequest): pod frontend-8f44m does not have a host assigned
I0513 10:23:28.486] has not:pod or type/name must be specified
I0513 10:23:28.619] pod "test-pod" deleted
I0513 10:23:28.769] replicaset.extensions "frontend" deleted
I0513 10:23:28.923] configmap "test-set-env-config" deleted
I0513 10:23:28.970] +++ exit code: 0
I0513 10:23:29.058] Recording: run_create_secret_tests
I0513 10:23:29.058] Running command: run_create_secret_tests
I0513 10:23:29.094] 
I0513 10:23:29.100] +++ Running case: test-cmd.run_create_secret_tests 
I0513 10:23:29.106] +++ working dir: /go/src/k8s.io/kubernetes
I0513 10:23:29.112] +++ command: run_create_secret_tests
I0513 10:23:29.272] Successful
I0513 10:23:29.278] message:Error from server (NotFound): secrets "mysecret" not found
I0513 10:23:29.278] has:secrets "mysecret" not found
I0513 10:23:29.609] Successful
I0513 10:23:29.610] message:Error from server (NotFound): secrets "mysecret" not found
I0513 10:23:29.610] has:secrets "mysecret" not found
I0513 10:23:29.613] Successful
I0513 10:23:29.613] message:user-specified
I0513 10:23:29.613] has:user-specified
I0513 10:23:29.746] Successful
I0513 10:23:29.889] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"44443835-b79c-461f-8457-2a976441461f","resourceVersion":"881","creationTimestamp":"2019-05-13T10:23:29Z"}}
... skipping 164 lines ...
I0513 10:23:35.660] valid-pod   0/1     Pending   0          0s
I0513 10:23:35.661] has:valid-pod
I0513 10:23:36.815] Successful
I0513 10:23:36.816] message:NAME        READY   STATUS    RESTARTS   AGE
I0513 10:23:36.816] valid-pod   0/1     Pending   0          0s
I0513 10:23:36.816] STATUS      REASON          MESSAGE
I0513 10:23:36.816] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0513 10:23:36.816] has:Timeout exceeded while reading body
I0513 10:23:36.971] Successful
I0513 10:23:36.972] message:NAME        READY   STATUS    RESTARTS   AGE
I0513 10:23:36.972] valid-pod   0/1     Pending   0          1s
I0513 10:23:36.972] has:valid-pod
I0513 10:23:37.115] Successful
I0513 10:23:37.116] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0513 10:23:37.116] has:Invalid timeout value
I0513 10:23:37.279] pod "valid-pod" deleted
I0513 10:23:37.324] +++ exit code: 0
I0513 10:23:37.393] Recording: run_crd_tests
I0513 10:23:37.393] Running command: run_crd_tests
I0513 10:23:37.452] 
... skipping 237 lines ...
I0513 10:23:45.967] foo.company.com/test patched
I0513 10:23:46.129] crd.sh:237: Successful get foos/test {{.patched}}: value1
I0513 10:23:46.265] (Bfoo.company.com/test patched
I0513 10:23:46.431] crd.sh:239: Successful get foos/test {{.patched}}: value2
I0513 10:23:46.584] (Bfoo.company.com/test patched
I0513 10:23:46.734] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I0513 10:23:47.015] (B+++ [0513 10:23:47] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0513 10:23:47.124] {
I0513 10:23:47.125]     "apiVersion": "company.com/v1",
I0513 10:23:47.125]     "kind": "Foo",
I0513 10:23:47.125]     "metadata": {
I0513 10:23:47.125]         "annotations": {
I0513 10:23:47.125]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 332 lines ...
I0513 10:24:15.023] (Bnamespace/non-native-resources created
I0513 10:24:15.355] bar.company.com/test created
I0513 10:24:15.546] crd.sh:456: Successful get bars {{len .items}}: 1
I0513 10:24:15.700] (Bnamespace "non-native-resources" deleted
I0513 10:24:21.076] crd.sh:459: Successful get bars {{len .items}}: 0
I0513 10:24:21.345] (Bcustomresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0513 10:24:21.448] Error from server (NotFound): namespaces "non-native-resources" not found
I0513 10:24:21.549] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0513 10:24:21.772] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0513 10:24:21.951] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0513 10:24:22.016] +++ exit code: 0
I0513 10:24:22.283] Recording: run_cmd_with_img_tests
I0513 10:24:22.284] Running command: run_cmd_with_img_tests
... skipping 10 lines ...
W0513 10:24:22.829] I0513 10:24:22.828841   50406 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557743062-28117", Name:"test1-7b9c75bcb9", UID:"b818cd45-c541-4784-9867-be6056569b17", APIVersion:"apps/v1", ResourceVersion:"1048", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-7b9c75bcb9-dqhq8
I0513 10:24:22.931] Successful
I0513 10:24:22.931] message:deployment.apps/test1 created
I0513 10:24:22.932] has:deployment.apps/test1 created
I0513 10:24:22.976] deployment.extensions "test1" deleted
I0513 10:24:23.128] Successful
I0513 10:24:23.129] message:error: Invalid image name "InvalidImageName": invalid reference format
I0513 10:24:23.130] has:error: Invalid image name "InvalidImageName": invalid reference format
I0513 10:24:23.156] +++ exit code: 0
I0513 10:24:23.227] +++ [0513 10:24:23] Testing recursive resources
I0513 10:24:23.235] +++ [0513 10:24:23] Creating namespace namespace-1557743063-14755
I0513 10:24:23.357] namespace/namespace-1557743063-14755 created
I0513 10:24:23.474] Context "test" modified.
I0513 10:24:23.645] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0513 10:24:24.284] (Bgeneric-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0513 10:24:24.288] (BSuccessful
I0513 10:24:24.289] message:pod/busybox0 created
I0513 10:24:24.289] pod/busybox1 created
I0513 10:24:24.289] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0513 10:24:24.290] has:error validating data: kind not set
I0513 10:24:24.464] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0513 10:24:24.898] (Bgeneric-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0513 10:24:24.902] (BSuccessful
I0513 10:24:24.902] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0513 10:24:24.903] has:Object 'Kind' is missing
I0513 10:24:25.087] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0513 10:24:25.752] (Bgeneric-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0513 10:24:25.758] (BSuccessful
I0513 10:24:25.759] message:pod/busybox0 replaced
I0513 10:24:25.759] pod/busybox1 replaced
I0513 10:24:25.759] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0513 10:24:25.760] has:error validating data: kind not set
W0513 10:24:25.873] I0513 10:24:25.872794   50406 namespace_controller.go:171] Namespace has been deleted non-native-resources
I0513 10:24:25.974] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0513 10:24:26.164] (BSuccessful
I0513 10:24:26.173] message:Name:         busybox0
I0513 10:24:26.174] Namespace:    namespace-1557743063-14755
I0513 10:24:26.174] Priority:     0
... skipping 154 lines ...
I0513 10:24:26.193] has:Object 'Kind' is missing
I0513 10:24:26.373] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0513 10:24:26.888] (Bgeneric-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0513 10:24:26.893] (BSuccessful
I0513 10:24:26.894] message:pod/busybox0 annotated
I0513 10:24:26.894] pod/busybox1 annotated
I0513 10:24:26.894] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0513 10:24:26.894] has:Object 'Kind' is missing
I0513 10:24:27.073] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0513 10:24:27.749] (Bgeneric-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0513 10:24:27.754] (BSuccessful
I0513 10:24:27.755] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0513 10:24:27.755] pod/busybox0 configured
I0513 10:24:27.755] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0513 10:24:27.755] pod/busybox1 configured
I0513 10:24:27.756] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0513 10:24:27.756] has:error validating data: kind not set
I0513 10:24:27.947] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0513 10:24:28.323] (Bdeployment.apps/nginx created
W0513 10:24:28.424] I0513 10:24:28.337126   50406 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557743063-14755", Name:"nginx", UID:"97882d05-8509-4de6-bac7-10070a0435ae", APIVersion:"apps/v1", ResourceVersion:"1075", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-958dc566b to 3
W0513 10:24:28.424] I0513 10:24:28.341907   50406 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557743063-14755", Name:"nginx-958dc566b", UID:"80fcd87e-6837-4c4a-b474-0cb060f45810", APIVersion:"apps/v1", ResourceVersion:"1076", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-958dc566b-zcncc
W0513 10:24:28.425] I0513 10:24:28.357941   50406 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557743063-14755", Name:"nginx-958dc566b", UID:"80fcd87e-6837-4c4a-b474-0cb060f45810", APIVersion:"apps/v1", ResourceVersion:"1076", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-958dc566b-sfj9j
W0513 10:24:28.425] I0513 10:24:28.366186   50406 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557743063-14755", Name:"nginx-958dc566b", UID:"80fcd87e-6837-4c4a-b474-0cb060f45810", APIVersion:"apps/v1", ResourceVersion:"1076", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-958dc566b-2j8cd
... skipping 45 lines ...
I0513 10:24:29.097] has:apps/v1
W0513 10:24:29.198] kubectl convert is DEPRECATED and will be removed in a future version.
W0513 10:24:29.199] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0513 10:24:29.300] deployment.extensions "nginx" deleted
I0513 10:24:29.469] Waiting for Get pods {{range.items}}{{.metadata.name}}:{{end}} : expected: busybox0:busybox1:, got: busybox0:busybox1:nginx-958dc566b-2j8cd:nginx-958dc566b-sfj9j:nginx-958dc566b-zcncc:
I0513 10:24:29.475] 
I0513 10:24:29.486] generic-resources.sh:280: FAIL!
I0513 10:24:29.487] Get pods {{range.items}}{{.metadata.name}}:{{end}}
I0513 10:24:29.487]   Expected: busybox0:busybox1:
I0513 10:24:29.488]   Got:      busybox0:busybox1:nginx-958dc566b-2j8cd:nginx-958dc566b-sfj9j:nginx-958dc566b-zcncc:
I0513 10:24:29.488] (B
I0513 10:24:29.489] 51 /go/src/k8s.io/kubernetes/hack/lib/test.sh
I0513 10:24:29.489] (B
... skipping 15 lines ...
W0513 10:24:29.602] I0513 10:24:29.561898   47072 secure_serving.go:160] Stopped listening on 127.0.0.1:8080
W0513 10:24:29.602] I0513 10:24:29.562463   47072 clientconn.go:1016] blockingPicker: the picked transport is not ready, loop back to repick
W0513 10:24:29.603] I0513 10:24:29.562568   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.603] I0513 10:24:29.562898   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.603] I0513 10:24:29.563557   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
W0513 10:24:29.604] I0513 10:24:29.565828   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.604] W0513 10:24:29.566129   47072 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0513 10:24:29.605] W0513 10:24:29.566189   47072 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0513 10:24:29.605] W0513 10:24:29.566244   47072 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0513 10:24:29.605] W0513 10:24:29.566139   47072 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0513 10:24:29.606] I0513 10:24:29.563717   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.606] I0513 10:24:29.566283   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.606] W0513 10:24:29.566291   47072 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0513 10:24:29.607] I0513 10:24:29.563718   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.607] I0513 10:24:29.566309   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.607] I0513 10:24:29.563746   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.607] I0513 10:24:29.566326   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.607] I0513 10:24:29.563761   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.608] I0513 10:24:29.566347   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.608] I0513 10:24:29.563786   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.608] I0513 10:24:29.566364   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.608] W0513 10:24:29.566367   47072 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0513 10:24:29.609] I0513 10:24:29.563786   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.609] I0513 10:24:29.566383   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.609] I0513 10:24:29.563811   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.609] I0513 10:24:29.566400   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.609] I0513 10:24:29.563814   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.610] W0513 10:24:29.566408   47072 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0513 10:24:29.610] I0513 10:24:29.566417   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.610] W0513 10:24:29.566448   47072 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0513 10:24:29.610] W0513 10:24:29.566482   47072 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0513 10:24:29.611] I0513 10:24:29.563840   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.611] W0513 10:24:29.566506   47072 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0513 10:24:29.611] I0513 10:24:29.566511   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.611] I0513 10:24:29.563870   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.612] I0513 10:24:29.566534   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.612] I0513 10:24:29.563932   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.612] W0513 10:24:29.566544   47072 clientconn.go:960] grpc: addrConn.transportMonitor exits due to: grpc: the connection is closing
W0513 10:24:29.612] I0513 10:24:29.566553   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.612] I0513 10:24:29.563964   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.613] I0513 10:24:29.566571   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.613] I0513 10:24:29.563988   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.613] I0513 10:24:29.566587   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.613] I0513 10:24:29.563991   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.614] I0513 10:24:29.566603   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.614] I0513 10:24:29.564470   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.614] W0513 10:24:29.566606   47072 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0513 10:24:29.614] I0513 10:24:29.566620   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.615] I0513 10:24:29.564505   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.615] I0513 10:24:29.566637   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.615] I0513 10:24:29.564533   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.615] I0513 10:24:29.566653   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.615] W0513 10:24:29.566655   47072 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0513 10:24:29.616] I0513 10:24:29.564610   47072 clientconn.go:1016] blockingPicker: the picked transport is not ready, loop back to repick
W0513 10:24:29.616] I0513 10:24:29.564665   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.616] I0513 10:24:29.566692   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.616] W0513 10:24:29.566695   47072 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0513 10:24:29.617] I0513 10:24:29.564691   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.617] I0513 10:24:29.566710   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.617] I0513 10:24:29.564729   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.617] I0513 10:24:29.566728   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.618] I0513 10:24:29.564734   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.618] W0513 10:24:29.566734   47072 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0513 10:24:29.618] I0513 10:24:29.566744   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.618] I0513 10:24:29.564772   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.619] I0513 10:24:29.566773   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.619] I0513 10:24:29.564819   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.619] I0513 10:24:29.566789   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.619] W0513 10:24:29.566790   47072 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0513 10:24:29.620] I0513 10:24:29.564832   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.620] I0513 10:24:29.566805   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.620] I0513 10:24:29.565088   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.620] I0513 10:24:29.566822   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.620] I0513 10:24:29.565116   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.621] W0513 10:24:29.566835   47072 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0513 10:24:29.621] I0513 10:24:29.566838   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.621] I0513 10:24:29.565127   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.621] I0513 10:24:29.566855   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.622] I0513 10:24:29.565141   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.622] I0513 10:24:29.566874   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.622] W0513 10:24:29.566876   47072 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0513 10:24:29.623] I0513 10:24:29.565167   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.623] I0513 10:24:29.566891   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.623] I0513 10:24:29.564794   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.623] I0513 10:24:29.566917   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.623] I0513 10:24:29.565236   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.624] I0513 10:24:29.566950   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.624] W0513 10:24:29.566952   47072 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0513 10:24:29.624] I0513 10:24:29.565272   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.624] I0513 10:24:29.566981   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.625] I0513 10:24:29.565295   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.625] W0513 10:24:29.566990   47072 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0513 10:24:29.625] I0513 10:24:29.566999   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.625] I0513 10:24:29.565295   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.626] I0513 10:24:29.567020   47072 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0513 10:24:29.626] E0513 10:24:29.565344   47072 controller.go:179] rpc error: code = Unavailable desc = transport is closing
I0513 10:24:29.872] junit report dir: /workspace/artifacts
I0513 10:24:29.879] +++ [0513 10:24:29] Clean up complete
I0513 10:24:29.889] Makefile:328: recipe for target 'test-cmd' failed
W0513 10:24:29.991] make: *** [test-cmd] Error 1
W0513 10:24:46.859] Traceback (most recent call last):
W0513 10:24:46.860]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 178, in <module>
W0513 10:24:46.860]     ARGS.exclude_typecheck, ARGS.exclude_godep)
W0513 10:24:46.860]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 140, in main
W0513 10:24:46.860]     check(*cmd)
W0513 10:24:46.861]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W0513 10:24:46.861]     subprocess.check_call(cmd)
W0513 10:24:46.861]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0513 10:24:46.874]     raise CalledProcessError(retcode, cmd)
W0513 10:24:46.875] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=y', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'EXCLUDE_TYPECHECK=n', '-e', 'EXCLUDE_GODEP=n', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.14-v20190318-2ac98e338', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
E0513 10:24:46.886] Command failed
I0513 10:24:46.887] process 497 exited with code 1 after 15.1m
E0513 10:24:46.887] FAIL: ci-kubernetes-integration-master
I0513 10:24:46.888] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0513 10:24:47.928] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0513 10:24:48.021] process 67162 exited with code 0 after 0.0m
I0513 10:24:48.021] Call:  gcloud config get-value account
I0513 10:24:48.640] process 67174 exited with code 0 after 0.0m
I0513 10:24:48.641] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0513 10:24:48.641] Upload result and artifacts...
I0513 10:24:48.641] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-kubernetes-integration-master/1127878026411905027
I0513 10:24:48.642] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/1127878026411905027/artifacts
W0513 10:24:50.468] CommandException: One or more URLs matched no objects.
E0513 10:24:50.729] Command failed
I0513 10:24:50.729] process 67186 exited with code 1 after 0.0m
W0513 10:24:50.729] Remote dir gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/1127878026411905027/artifacts not exist yet
I0513 10:24:50.729] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-integration-master/1127878026411905027/artifacts
I0513 10:24:53.927] process 67328 exited with code 0 after 0.1m
W0513 10:24:53.928] metadata path /workspace/_artifacts/metadata.json does not exist
W0513 10:24:53.928] metadata not found or invalid, init with empty metadata
... skipping 15 lines ...