PR: chardch: E2E test for GPU job interrupted by node recreate
Result: FAILURE
Tests: 1 failed / 1398 succeeded
Started: 2019-05-16 00:11
Elapsed: 29m40s
Builder: gke-prow-containerd-pool-99179761-02c3
Refs: master:aaec77a9, 76401:91e01e84
pod: 1942e764-776f-11e9-b8b0-0a580a6c014a
infra-commit: 3350b5955
repo: k8s.io/kubernetes
repo-commit: 5945096bc5f99c1108133737ba79d1b49c193d4f
repos: {u'k8s.io/kubernetes': u'master:aaec77a94b67878ca1bdd884f2778f4388d203f2,76401:91e01e849dc2f1258037816825d610012f7e7abd'}

Test Failures


k8s.io/kubernetes/test/integration/auth [build failed] 0.00s
from junit_d431ed5f68ae4ddf888439fb96b687a923412204_20190516-002750.xml

1398 passed tests (not shown)

4 skipped tests (not shown)

Error lines from build-log.txt

... skipping 319 lines ...
W0516 00:21:41.051] I0516 00:21:41.050871   47549 serving.go:312] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
W0516 00:21:41.052] I0516 00:21:41.051008   47549 server.go:558] external host was not specified, using 172.17.0.2
W0516 00:21:41.052] W0516 00:21:41.051035   47549 authentication.go:415] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
W0516 00:21:41.052] I0516 00:21:41.051730   47549 server.go:145] Version: v1.16.0-alpha.0.61+5945096bc5f99c
W0516 00:21:41.669] I0516 00:21:41.668889   47549 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0516 00:21:41.670] I0516 00:21:41.668920   47549 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0516 00:21:41.670] E0516 00:21:41.669410   47549 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 00:21:41.670] E0516 00:21:41.669510   47549 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 00:21:41.671] E0516 00:21:41.669537   47549 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 00:21:41.671] E0516 00:21:41.669569   47549 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 00:21:41.671] E0516 00:21:41.669596   47549 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 00:21:41.671] E0516 00:21:41.669616   47549 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 00:21:41.672] E0516 00:21:41.669645   47549 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 00:21:41.672] E0516 00:21:41.669677   47549 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 00:21:41.672] E0516 00:21:41.669723   47549 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 00:21:41.672] E0516 00:21:41.669799   47549 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 00:21:41.672] E0516 00:21:41.669842   47549 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 00:21:41.673] E0516 00:21:41.669856   47549 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
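The repeated failures above are Prometheus refusing to register a second collector under an already-registered name (the controller's workqueue metrics get set up twice). A minimal sketch of that behavior, assuming the prometheus/client_golang library these components use; the subsystem/name pair is illustrative:

    package main

    import (
        "fmt"

        "github.com/prometheus/client_golang/prometheus"
    )

    func newDepthGauge() prometheus.Gauge {
        return prometheus.NewGauge(prometheus.GaugeOpts{
            Subsystem: "admission_quota_controller", // name taken from the log above
            Name:      "depth",
            Help:      "Current depth of the workqueue.",
        })
    }

    func main() {
        if err := prometheus.Register(newDepthGauge()); err != nil {
            fmt.Println("first registration:", err) // no error the first time
        }
        // Registering a second collector with the same fully-qualified name fails:
        if err := prometheus.Register(newDepthGauge()); err != nil {
            fmt.Println("second registration:", err)
            // duplicate metrics collector registration attempted
        }
    }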
W0516 00:21:41.673] I0516 00:21:41.669872   47549 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0516 00:21:41.673] I0516 00:21:41.669883   47549 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0516 00:21:41.673] I0516 00:21:41.671408   47549 client.go:354] parsed scheme: ""
W0516 00:21:41.674] I0516 00:21:41.671447   47549 client.go:354] scheme "" not registered, fallback to default scheme
W0516 00:21:41.674] I0516 00:21:41.671512   47549 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0516 00:21:41.674] I0516 00:21:41.671592   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 361 lines ...
W0516 00:21:42.260] W0516 00:21:42.259801   47549 genericapiserver.go:347] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0516 00:21:42.664] I0516 00:21:42.664390   47549 client.go:354] parsed scheme: ""
W0516 00:21:42.665] I0516 00:21:42.665546   47549 client.go:354] scheme "" not registered, fallback to default scheme
W0516 00:21:42.666] I0516 00:21:42.666169   47549 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0516 00:21:42.667] I0516 00:21:42.666832   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:21:42.668] I0516 00:21:42.667956   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:21:43.167] E0516 00:21:43.166214   47549 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 00:21:43.167] E0516 00:21:43.166286   47549 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 00:21:43.167] E0516 00:21:43.166335   47549 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 00:21:43.167] E0516 00:21:43.166352   47549 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 00:21:43.168] E0516 00:21:43.167562   47549 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 00:21:43.168] E0516 00:21:43.167620   47549 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 00:21:43.168] E0516 00:21:43.167635   47549 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 00:21:43.168] E0516 00:21:43.167654   47549 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 00:21:43.168] E0516 00:21:43.167697   47549 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 00:21:43.168] E0516 00:21:43.167766   47549 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 00:21:43.169] E0516 00:21:43.167800   47549 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 00:21:43.169] E0516 00:21:43.167815   47549 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0516 00:21:43.169] I0516 00:21:43.167848   47549 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0516 00:21:43.169] I0516 00:21:43.167861   47549 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0516 00:21:43.169] I0516 00:21:43.169155   47549 client.go:354] parsed scheme: ""
W0516 00:21:43.169] I0516 00:21:43.169178   47549 client.go:354] scheme "" not registered, fallback to default scheme
W0516 00:21:43.169] I0516 00:21:43.169214   47549 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0516 00:21:43.170] I0516 00:21:43.169254   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 87 lines ...
W0516 00:22:23.627] I0516 00:22:23.552245   50880 resource_quota_monitor.go:303] QuotaMonitor running
W0516 00:22:23.627] I0516 00:22:23.552825   50880 controllermanager.go:523] Started "serviceaccount"
W0516 00:22:23.627] W0516 00:22:23.552844   50880 controllermanager.go:502] "bootstrapsigner" is disabled
W0516 00:22:23.627] I0516 00:22:23.552945   50880 serviceaccounts_controller.go:115] Starting service account controller
W0516 00:22:23.627] I0516 00:22:23.552986   50880 controller_utils.go:1029] Waiting for caches to sync for service account controller
W0516 00:22:23.628] I0516 00:22:23.553304   50880 node_lifecycle_controller.go:77] Sending events to api server
W0516 00:22:23.628] E0516 00:22:23.553402   50880 core.go:160] failed to start cloud node lifecycle controller: no cloud provider provided
W0516 00:22:23.628] W0516 00:22:23.553415   50880 controllermanager.go:515] Skipping "cloud-node-lifecycle"
W0516 00:22:23.628] W0516 00:22:23.553465   50880 controllermanager.go:515] Skipping "ttl-after-finished"
W0516 00:22:23.950] The Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.0.0.1": provided IP is already allocated
W0516 00:22:23.963] I0516 00:22:23.962212   50880 garbagecollector.go:130] Starting garbage collector controller
W0516 00:22:23.963] I0516 00:22:23.962223   50880 controllermanager.go:523] Started "garbagecollector"
W0516 00:22:23.963] I0516 00:22:23.962281   50880 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
... skipping 24 lines ...
W0516 00:22:23.977] I0516 00:22:23.976909   50880 controller_utils.go:1029] Waiting for caches to sync for disruption controller
W0516 00:22:23.977] I0516 00:22:23.977521   50880 controllermanager.go:523] Started "cronjob"
W0516 00:22:23.978] I0516 00:22:23.977646   50880 cronjob_controller.go:96] Starting CronJob Manager
W0516 00:22:23.978] I0516 00:22:23.978058   50880 controllermanager.go:523] Started "ttl"
W0516 00:22:23.978] I0516 00:22:23.978365   50880 ttl_controller.go:116] Starting TTL controller
W0516 00:22:23.978] I0516 00:22:23.978393   50880 controller_utils.go:1029] Waiting for caches to sync for TTL controller
W0516 00:22:23.979] E0516 00:22:23.978849   50880 core.go:76] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0516 00:22:23.979] W0516 00:22:23.978868   50880 controllermanager.go:515] Skipping "service"
W0516 00:22:23.980] I0516 00:22:23.980077   50880 controllermanager.go:523] Started "job"
W0516 00:22:23.980] I0516 00:22:23.980217   50880 job_controller.go:143] Starting job controller
W0516 00:22:23.980] I0516 00:22:23.980248   50880 controller_utils.go:1029] Waiting for caches to sync for job controller
W0516 00:22:23.982] I0516 00:22:23.981612   50880 controllermanager.go:523] Started "horizontalpodautoscaling"
W0516 00:22:23.982] W0516 00:22:23.981679   50880 controllermanager.go:515] Skipping "csrsigning"
... skipping 36 lines ...
W0516 00:22:23.998] I0516 00:22:23.998240   50880 controllermanager.go:523] Started "persistentvolume-binder"
W0516 00:22:23.999] I0516 00:22:23.998267   50880 pv_controller_base.go:271] Starting persistent volume controller
W0516 00:22:23.999] I0516 00:22:23.998296   50880 controller_utils.go:1029] Waiting for caches to sync for persistent volume controller
W0516 00:22:23.999] I0516 00:22:23.998800   50880 controllermanager.go:523] Started "clusterrole-aggregation"
W0516 00:22:23.999] I0516 00:22:23.999103   50880 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
W0516 00:22:23.999] I0516 00:22:23.999189   50880 controller_utils.go:1029] Waiting for caches to sync for ClusterRoleAggregator controller
W0516 00:22:24.018] W0516 00:22:24.017557   50880 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0516 00:22:24.066] I0516 00:22:24.065681   50880 controller_utils.go:1036] Caches are synced for stateful set controller
W0516 00:22:24.074] I0516 00:22:24.074003   50880 controller_utils.go:1036] Caches are synced for deployment controller
W0516 00:22:24.076] I0516 00:22:24.075961   50880 controller_utils.go:1036] Caches are synced for ReplicationController controller
W0516 00:22:24.077] I0516 00:22:24.076219   50880 controller_utils.go:1036] Caches are synced for GC controller
W0516 00:22:24.078] I0516 00:22:24.077543   50880 controller_utils.go:1036] Caches are synced for disruption controller
W0516 00:22:24.078] I0516 00:22:24.077576   50880 disruption.go:294] Sending events to api server.
W0516 00:22:24.078] I0516 00:22:24.078554   50880 controller_utils.go:1036] Caches are synced for TTL controller
W0516 00:22:24.081] I0516 00:22:24.080503   50880 controller_utils.go:1036] Caches are synced for job controller
W0516 00:22:24.083] I0516 00:22:24.082672   50880 controller_utils.go:1036] Caches are synced for PVC protection controller
W0516 00:22:24.084] I0516 00:22:24.084085   50880 controller_utils.go:1036] Caches are synced for endpoint controller
W0516 00:22:24.093] I0516 00:22:24.092973   50880 controller_utils.go:1036] Caches are synced for ReplicaSet controller
W0516 00:22:24.100] I0516 00:22:24.099507   50880 controller_utils.go:1036] Caches are synced for ClusterRoleAggregator controller
W0516 00:22:24.114] E0516 00:22:24.113724   50880 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0516 00:22:24.115] E0516 00:22:24.113988   50880 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
W0516 00:22:24.127] E0516 00:22:24.126183   50880 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0516 00:22:24.183] I0516 00:22:24.182335   50880 controller_utils.go:1036] Caches are synced for HPA controller
W0516 00:22:24.253] I0516 00:22:24.253211   50880 controller_utils.go:1036] Caches are synced for service account controller
W0516 00:22:24.257] I0516 00:22:24.256486   47549 controller.go:606] quota admission added evaluator for: serviceaccounts
W0516 00:22:24.273] I0516 00:22:24.272991   50880 controller_utils.go:1036] Caches are synced for namespace controller
W0516 00:22:24.286] I0516 00:22:24.285751   50880 controller_utils.go:1036] Caches are synced for daemon sets controller
W0516 00:22:24.298] I0516 00:22:24.297348   50880 controller_utils.go:1036] Caches are synced for taint controller
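The "object has been modified" errors above are routine optimistic-concurrency conflicts: the ClusterRoleAggregator wrote the edit and view ClusterRoles while another writer had already bumped their resourceVersion, so the stale update is rejected and retried. Client code typically handles the same error with client-go's conflict-retry helper; a minimal sketch using recent client-go signatures, with a hypothetical label change:

    package example

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // labelClusterRole retries the update whenever the server answers with a
    // Conflict, re-reading the object so each attempt carries a fresh
    // resourceVersion. The label mutation itself is hypothetical.
    func labelClusterRole(ctx context.Context, cs kubernetes.Interface) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            cr, err := cs.RbacV1().ClusterRoles().Get(ctx, "edit", metav1.GetOptions{})
            if err != nil {
                return err
            }
            if cr.Labels == nil {
                cr.Labels = map[string]string{}
            }
            cr.Labels["example"] = "retried"
            _, err = cs.RbacV1().ClusterRoles().Update(ctx, cr, metav1.UpdateOptions{})
            return err // a Conflict here triggers another attempt
        })
    }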
... skipping 93 lines ...
I0516 00:22:28.376] +++ working dir: /go/src/k8s.io/kubernetes
I0516 00:22:28.380] +++ command: run_RESTMapper_evaluation_tests
I0516 00:22:28.392] +++ [0516 00:22:28] Creating namespace namespace-1557966148-304
I0516 00:22:28.464] namespace/namespace-1557966148-304 created
I0516 00:22:28.538] Context "test" modified.
I0516 00:22:28.549] +++ [0516 00:22:28] Testing RESTMapper
I0516 00:22:28.664] +++ [0516 00:22:28] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0516 00:22:28.685] +++ exit code: 0
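The RESTMapper being tested here is what turns a kubectl resource name into a group/version/kind via API discovery; an unknown name surfaces as the "server doesn't have a resource type" error above. A hedged sketch of the same lookup with client-go (the kubeconfig path is an assumption):

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/api/meta"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/discovery"
        "k8s.io/client-go/restmapper"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is a placeholder; the test harness writes its own.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/admin.kubeconfig")
        if err != nil {
            panic(err)
        }
        dc, err := discovery.NewDiscoveryClientForConfig(cfg)
        if err != nil {
            panic(err)
        }
        groups, err := restmapper.GetAPIGroupResources(dc)
        if err != nil {
            panic(err)
        }
        mapper := restmapper.NewDiscoveryRESTMapper(groups)
        _, err = mapper.RESTMapping(schema.GroupKind{Kind: "UnknownResourceType"})
        if meta.IsNoMatchError(err) {
            // kubectl reports this as: the server doesn't have a resource type
            fmt.Println("no match:", err)
        }
    }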
I0516 00:22:28.819] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0516 00:22:28.820] bindings                                                                      true         Binding
I0516 00:22:28.820] componentstatuses                 cs                                          false        ComponentStatus
I0516 00:22:28.820] configmaps                        cm                                          true         ConfigMap
I0516 00:22:28.820] endpoints                         ep                                          true         Endpoints
... skipping 640 lines ...
I0516 00:22:50.053] core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0516 00:22:50.255] core.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0516 00:22:50.359] core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0516 00:22:50.543] core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0516 00:22:50.642] core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0516 00:22:50.735] pod "valid-pod" force deleted
W0516 00:22:50.836] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0516 00:22:50.836] error: setting 'all' parameter but found a non empty selector. 
W0516 00:22:50.836] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0516 00:22:50.937] core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{$id_field}}:{{end}}: 
I0516 00:22:50.958] core.sh:211: Successful get namespaces {{range.items}}{{ if eq $id_field \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
I0516 00:22:51.042] namespace/test-kubectl-describe-pod created
I0516 00:22:51.150] core.sh:215: Successful get namespaces/test-kubectl-describe-pod {{.metadata.name}}: test-kubectl-describe-pod
I0516 00:22:51.252] core.sh:219: Successful get secrets --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 11 lines ...
I0516 00:22:52.313] poddisruptionbudget.policy/test-pdb-3 created
I0516 00:22:52.427] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0516 00:22:52.509] poddisruptionbudget.policy/test-pdb-4 created
I0516 00:22:52.624] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0516 00:22:52.799] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:22:53.031] pod/env-test-pod created
W0516 00:22:53.131] error: min-available and max-unavailable cannot be both specified
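test-pdb-3 and test-pdb-4 above exercise the two shapes maxUnavailable accepts, an integer or a percentage, both carried by apimachinery's IntOrString type; the min-available/max-unavailable error is the validation that rejects specifying both. A sketch against policy/v1beta1, the API of this vintage:

    package main

    import (
        "fmt"

        policyv1beta1 "k8s.io/api/policy/v1beta1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        two := intstr.FromInt(2)         // integer form, as in test-pdb-3
        half := intstr.FromString("50%") // percentage form, as in test-pdb-4

        pdb := policyv1beta1.PodDisruptionBudget{
            ObjectMeta: metav1.ObjectMeta{Name: "test-pdb-3"},
            Spec: policyv1beta1.PodDisruptionBudgetSpec{
                MaxUnavailable: &two,
                // Setting MinAvailable here as well is what kubectl rejects with
                // "min-available and max-unavailable cannot be both specified".
            },
        }
        fmt.Println(pdb.Spec.MaxUnavailable.IntValue(), half.String())
    }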
I0516 00:22:53.267] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0516 00:22:53.268] Name:         env-test-pod
I0516 00:22:53.268] Namespace:    test-kubectl-describe-pod
I0516 00:22:53.268] Priority:     0
I0516 00:22:53.268] Node:         <none>
I0516 00:22:53.268] Labels:       <none>
... skipping 143 lines ...
I0516 00:23:06.272] service "modified" deleted
I0516 00:23:06.366] replicationcontroller "modified" deleted
I0516 00:23:06.705] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:23:06.907] pod/valid-pod created
I0516 00:23:07.027] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0516 00:23:07.193] Successful
I0516 00:23:07.193] message:Error from server: cannot restore map from string
I0516 00:23:07.194] has:cannot restore map from string
I0516 00:23:07.294] Successful
I0516 00:23:07.294] message:pod/valid-pod patched (no change)
I0516 00:23:07.294] has:patched (no change)
I0516 00:23:07.384] pod/valid-pod patched
I0516 00:23:07.489] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0516 00:23:07.596] core.sh:457: Successful get pods {{range.items}}{{.metadata.annotations}}:{{end}}: map[kubernetes.io/change-cause:kubectl patch pod valid-pod --server=http://127.0.0.1:8080 --match-server-version=true --record=true --patch={"spec":{"containers":[{"name": "kubernetes-serve-hostname", "image": "nginx"}]}}]:
I0516 00:23:07.689] pod/valid-pod patched
I0516 00:23:07.793] core.sh:461: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx2:
I0516 00:23:07.880] pod/valid-pod patched
W0516 00:23:07.981] E0516 00:23:07.181019   47549 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
I0516 00:23:08.081] core.sh:465: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0516 00:23:08.087] pod/valid-pod patched
I0516 00:23:08.213] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0516 00:23:08.296] pod/valid-pod patched
I0516 00:23:08.409] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0516 00:23:08.586] pod/valid-pod patched
I0516 00:23:08.703] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0516 00:23:08.914] +++ [0516 00:23:08] "kubectl patch with resourceVersion 502" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
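These patch steps correspond to strategic-merge-patch requests against the pod, and the resourceVersion 502 case shows the server rejecting a patch whose embedded resourceVersion is stale. A minimal sketch of the equivalent client-go call (recent signatures; cs, ctx, and ns are assumed):

    package example

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // patchImage sends the same strategic merge patch kubectl recorded in the
    // change-cause annotation above.
    func patchImage(ctx context.Context, cs kubernetes.Interface, ns string) error {
        patch := []byte(`{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"nginx"}]}}`)
        _, err := cs.CoreV1().Pods(ns).Patch(ctx, "valid-pod",
            types.StrategicMergePatchType, patch, metav1.PatchOptions{})
        return err
    }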
I0516 00:23:09.224] pod "valid-pod" deleted
I0516 00:23:09.237] pod/valid-pod replaced
I0516 00:23:09.362] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0516 00:23:09.580] Successful
I0516 00:23:09.580] message:error: --grace-period must have --force specified
I0516 00:23:09.580] has:\-\-grace-period must have \-\-force specified
I0516 00:23:09.792] Successful
I0516 00:23:09.792] message:error: --timeout must have --force specified
I0516 00:23:09.792] has:\-\-timeout must have \-\-force specified
W0516 00:23:10.000] W0516 00:23:09.999903   50880 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0516 00:23:10.101] node/node-v1-test created
I0516 00:23:10.221] node/node-v1-test replaced
I0516 00:23:10.342] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0516 00:23:10.431] node "node-v1-test" deleted
I0516 00:23:10.549] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0516 00:23:10.904] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
... skipping 17 lines ...
I0516 00:23:12.781] core.sh:593: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0516 00:23:12.872] pod/valid-pod labeled
W0516 00:23:12.972] Edit cancelled, no changes made.
W0516 00:23:12.973] Edit cancelled, no changes made.
W0516 00:23:12.973] Edit cancelled, no changes made.
W0516 00:23:12.973] Edit cancelled, no changes made.
W0516 00:23:12.973] error: 'name' already has a value (valid-pod), and --overwrite is false
I0516 00:23:13.074] core.sh:597: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
I0516 00:23:13.085] core.sh:601: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0516 00:23:13.177] pod "valid-pod" force deleted
W0516 00:23:13.277] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0516 00:23:13.378] core.sh:605: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:23:13.379] +++ [0516 00:23:13] Creating namespace namespace-1557966193-16753
... skipping 82 lines ...
I0516 00:23:21.284] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0516 00:23:21.288] +++ working dir: /go/src/k8s.io/kubernetes
I0516 00:23:21.291] +++ command: run_kubectl_create_error_tests
I0516 00:23:21.306] +++ [0516 00:23:21] Creating namespace namespace-1557966201-6835
I0516 00:23:21.385] namespace/namespace-1557966201-6835 created
I0516 00:23:21.460] Context "test" modified.
I0516 00:23:21.471] +++ [0516 00:23:21] Testing kubectl create with error
W0516 00:23:21.572] Error: must specify one of -f and -k
W0516 00:23:21.572] 
W0516 00:23:21.572] Create a resource from a file or from stdin.
W0516 00:23:21.572] 
W0516 00:23:21.572]  JSON and YAML formats are accepted.
W0516 00:23:21.572] 
W0516 00:23:21.572] Examples:
... skipping 41 lines ...
W0516 00:23:21.577] 
W0516 00:23:21.577] Usage:
W0516 00:23:21.577]   kubectl create -f FILENAME [options]
W0516 00:23:21.577] 
W0516 00:23:21.577] Use "kubectl <command> --help" for more information about a given command.
W0516 00:23:21.577] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0516 00:23:21.772] +++ [0516 00:23:21] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0516 00:23:21.873] kubectl convert is DEPRECATED and will be removed in a future version.
W0516 00:23:21.873] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0516 00:23:21.974] +++ exit code: 0
I0516 00:23:22.019] Recording: run_kubectl_apply_tests
I0516 00:23:22.019] Running command: run_kubectl_apply_tests
I0516 00:23:22.045] 
... skipping 20 lines ...
W0516 00:23:24.609] I0516 00:23:24.608508   47549 client.go:354] scheme "" not registered, fallback to default scheme
W0516 00:23:24.609] I0516 00:23:24.608551   47549 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0516 00:23:24.610] I0516 00:23:24.608604   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:23:24.610] I0516 00:23:24.609101   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:23:24.614] I0516 00:23:24.613579   47549 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
I0516 00:23:24.714] kind.mygroup.example.com/myobj serverside-applied (server dry run)
W0516 00:23:24.815] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0516 00:23:24.916] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0516 00:23:24.916] +++ exit code: 0
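The "(server dry run)" line above, followed by NotFound once the object is fetched, is server-side dry run: the apiserver runs admission and validation but persists nothing. A hedged sketch of the same request through the dynamic client; the version in the GVR is an assumption, since the log only shows the group:

    package example

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/dynamic"
    )

    func dryRunCreate(ctx context.Context, dyn dynamic.Interface) error {
        gvr := schema.GroupVersionResource{
            Group: "mygroup.example.com", Version: "v1alpha1", Resource: "resources",
        } // the version is an assumption; the log only shows the group
        obj := &unstructured.Unstructured{Object: map[string]interface{}{
            "apiVersion": "mygroup.example.com/v1alpha1",
            "kind":       "Kind",
            "metadata":   map[string]interface{}{"name": "myobj"},
        }}
        _, err := dyn.Resource(gvr).Namespace("default").Create(ctx, obj,
            metav1.CreateOptions{DryRun: []string{metav1.DryRunAll}})
        return err // validated server-side, but nothing is persisted
    }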
I0516 00:23:24.919] Recording: run_kubectl_run_tests
I0516 00:23:24.919] Running command: run_kubectl_run_tests
I0516 00:23:24.949] 
I0516 00:23:24.952] +++ Running case: test-cmd.run_kubectl_run_tests 
... skipping 97 lines ...
I0516 00:23:28.942] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:23:29.143] pod/selector-test-pod created
W0516 00:23:29.243] kubectl run --generator=cronjob/v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0516 00:23:29.244] I0516 00:23:28.062294   47549 controller.go:606] quota admission added evaluator for: cronjobs.batch
I0516 00:23:29.344] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0516 00:23:29.369] Successful
I0516 00:23:29.369] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0516 00:23:29.370] has:pods "selector-test-pod-dont-apply" not found
I0516 00:23:29.448] pod "selector-test-pod" deleted
I0516 00:23:29.477] +++ exit code: 0
I0516 00:23:29.526] Recording: run_kubectl_apply_deployments_tests
I0516 00:23:29.526] Running command: run_kubectl_apply_deployments_tests
I0516 00:23:29.553] 
... skipping 27 lines ...
I0516 00:23:31.619] deployment.extensions "my-depl" deleted
I0516 00:23:31.628] replicaset.extensions "my-depl-588655868c" deleted
I0516 00:23:31.633] replicaset.extensions "my-depl-69cd868dd5" deleted
I0516 00:23:31.640] pod "my-depl-588655868c-x876p" deleted
I0516 00:23:31.650] pod "my-depl-69cd868dd5-4hvqk" deleted
W0516 00:23:31.750] I0516 00:23:31.627486   47549 controller.go:606] quota admission added evaluator for: replicasets.extensions
W0516 00:23:31.751] E0516 00:23:31.647574   50880 replica_set.go:450] Sync "namespace-1557966209-29335/my-depl-588655868c" failed with Operation cannot be fulfilled on replicasets.apps "my-depl-588655868c": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1557966209-29335/my-depl-588655868c, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: be1e1bfb-5315-4678-a1d7-451b482125f7, UID in object meta: 
W0516 00:23:31.751] E0516 00:23:31.651382   50880 replica_set.go:450] Sync "namespace-1557966209-29335/my-depl-588655868c" failed with replicasets.apps "my-depl-588655868c" not found
I0516 00:23:31.852] apps.sh:137: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:23:31.885] apps.sh:138: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:23:31.989] apps.sh:139: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:23:32.093] apps.sh:143: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:23:32.307] deployment.extensions/nginx created
W0516 00:23:32.408] I0516 00:23:32.312843   50880 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557966209-29335", Name:"nginx", UID:"8f5e8b70-88ec-4415-b1cb-fa46c232924d", APIVersion:"apps/v1", ResourceVersion:"600", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-8c9ccf86d to 3
W0516 00:23:32.409] I0516 00:23:32.317771   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966209-29335", Name:"nginx-8c9ccf86d", UID:"e4ea88c2-aba4-4075-86e2-f1c6be44719b", APIVersion:"apps/v1", ResourceVersion:"601", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8c9ccf86d-wz589
W0516 00:23:32.409] I0516 00:23:32.321410   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966209-29335", Name:"nginx-8c9ccf86d", UID:"e4ea88c2-aba4-4075-86e2-f1c6be44719b", APIVersion:"apps/v1", ResourceVersion:"601", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8c9ccf86d-w67t2
W0516 00:23:32.409] I0516 00:23:32.323881   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966209-29335", Name:"nginx-8c9ccf86d", UID:"e4ea88c2-aba4-4075-86e2-f1c6be44719b", APIVersion:"apps/v1", ResourceVersion:"601", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8c9ccf86d-2hv98
I0516 00:23:32.510] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I0516 00:23:36.761] Successful
I0516 00:23:36.761] message:Error from server (Conflict): error when applying patch:
I0516 00:23:36.762] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1557966209-29335\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0516 00:23:36.762] to:
I0516 00:23:36.762] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I0516 00:23:36.762] Name: "nginx", Namespace: "namespace-1557966209-29335"
I0516 00:23:36.764] Object: &{map["apiVersion":"extensions/v1beta1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1557966209-29335\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-05-16T00:23:32Z" "generation":'\x01' "labels":map["name":"nginx"] "managedFields":[map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map["f:deployment.kubernetes.io/revision":map[]]] "f:status":map["f:conditions":map[".":map[] "k:{\"type\":\"Available\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]]] "f:observedGeneration":map[] "f:replicas":map[] "f:unavailableReplicas":map[] "f:updatedReplicas":map[]]] "manager":"kube-controller-manager" "operation":"Update" "time":"2019-05-16T00:23:32Z"] map["apiVersion":"extensions/v1beta1" "fields":map["f:metadata":map["f:annotations":map[".":map[] "f:kubectl.kubernetes.io/last-applied-configuration":map[]] "f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:progressDeadlineSeconds":map[] "f:replicas":map[] "f:revisionHistoryLimit":map[] "f:selector":map[".":map[] "f:matchLabels":map[".":map[] "f:name":map[]]] "f:strategy":map["f:rollingUpdate":map[".":map[] "f:maxSurge":map[] "f:maxUnavailable":map[]] "f:type":map[]] "f:template":map["f:metadata":map["f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:containers":map["k:{\"name\":\"nginx\"}":map[".":map[] "f:image":map[] "f:imagePullPolicy":map[] "f:name":map[] "f:ports":map[".":map[] "k:{\"containerPort\":80,\"protocol\":\"TCP\"}":map[".":map[] "f:containerPort":map[] "f:protocol":map[]]] "f:resources":map[] "f:terminationMessagePath":map[] "f:terminationMessagePolicy":map[]]] "f:dnsPolicy":map[] "f:restartPolicy":map[] "f:schedulerName":map[] "f:securityContext":map[] "f:terminationGracePeriodSeconds":map[]]]]] "manager":"kubectl" "operation":"Update" "time":"2019-05-16T00:23:32Z"]] "name":"nginx" "namespace":"namespace-1557966209-29335" "resourceVersion":"613" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1557966209-29335/deployments/nginx" "uid":"8f5e8b70-88ec-4415-b1cb-fa46c232924d"] "spec":map["progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\x03' "revisionHistoryLimit":%!q(int64=+2147483647) "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":'\x01' "maxUnavailable":'\x01'] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] "status":map["conditions":[map["lastTransitionTime":"2019-05-16T00:23:32Z" "lastUpdateTime":"2019-05-16T00:23:32Z" "message":"Deployment does not have minimum 
availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0516 00:23:36.765] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I0516 00:23:36.765] has:Error from server (Conflict)
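This Conflict is kubectl apply at work: it diffs the last-applied-configuration annotation against the new manifest to produce a strategic merge patch, and because the applied config here carried resourceVersion "99", the server treated the write as stale. The diff step looks roughly like this sketch using apimachinery's strategicpatch package, with JSON trimmed from the log above:

    package example

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        "k8s.io/apimachinery/pkg/util/strategicpatch"
    )

    // diffSelectors computes the two-way patch kubectl apply would send for the
    // selector change shown above (JSON trimmed to the relevant field).
    func diffSelectors() error {
        original := []byte(`{"spec":{"selector":{"matchLabels":{"name":"nginx1"}}}}`)
        modified := []byte(`{"spec":{"selector":{"matchLabels":{"name":"nginx2"}}}}`)
        // The typed Deployment supplies the patch-strategy metadata (merge keys etc.).
        patch, err := strategicpatch.CreateTwoWayMergePatch(original, modified, appsv1.Deployment{})
        if err != nil {
            return err
        }
        fmt.Printf("%s\n", patch) // roughly {"spec":{"selector":{"matchLabels":{"name":"nginx2"}}}}
        return nil
    }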
W0516 00:23:36.865] I0516 00:23:35.621705   50880 horizontal.go:320] Horizontal Pod Autoscaler frontend has been deleted in namespace-1557966198-15351
W0516 00:23:41.167] E0516 00:23:41.166930   50880 replica_set.go:450] Sync "namespace-1557966209-29335/nginx-8c9ccf86d" failed with replicasets.apps "nginx-8c9ccf86d" not found
I0516 00:23:42.092] deployment.extensions/nginx configured
W0516 00:23:42.193] I0516 00:23:42.097685   50880 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557966209-29335", Name:"nginx", UID:"dfa56393-617c-43fd-bbc0-2779cf9d6502", APIVersion:"apps/v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-86bb9b4d9f to 3
W0516 00:23:42.194] I0516 00:23:42.102671   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966209-29335", Name:"nginx-86bb9b4d9f", UID:"771e02c5-91b9-4e84-a49d-1a496b09efe0", APIVersion:"apps/v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-86bb9b4d9f-9fsh4
W0516 00:23:42.194] I0516 00:23:42.106507   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966209-29335", Name:"nginx-86bb9b4d9f", UID:"771e02c5-91b9-4e84-a49d-1a496b09efe0", APIVersion:"apps/v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-86bb9b4d9f-g54pk
W0516 00:23:42.194] I0516 00:23:42.106633   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966209-29335", Name:"nginx-86bb9b4d9f", UID:"771e02c5-91b9-4e84-a49d-1a496b09efe0", APIVersion:"apps/v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-86bb9b4d9f-8r55x
I0516 00:23:42.295] Successful
... skipping 168 lines ...
I0516 00:23:50.098] +++ [0516 00:23:50] Creating namespace namespace-1557966230-32276
I0516 00:23:50.194] namespace/namespace-1557966230-32276 created
I0516 00:23:50.279] Context "test" modified.
I0516 00:23:50.291] +++ [0516 00:23:50] Testing kubectl get
I0516 00:23:50.396] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:23:50.487] Successful
I0516 00:23:50.487] message:Error from server (NotFound): pods "abc" not found
I0516 00:23:50.487] has:pods "abc" not found
I0516 00:23:50.584] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:23:50.674] Successful
I0516 00:23:50.675] message:Error from server (NotFound): pods "abc" not found
I0516 00:23:50.675] has:pods "abc" not found
I0516 00:23:50.776] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:23:50.872] Successful
I0516 00:23:50.872] message:{
I0516 00:23:50.872]     "apiVersion": "v1",
I0516 00:23:50.872]     "items": [],
... skipping 23 lines ...
I0516 00:23:51.247] has not:No resources found
I0516 00:23:51.342] Successful
I0516 00:23:51.342] message:NAME
I0516 00:23:51.342] has not:No resources found
I0516 00:23:51.442] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:23:51.549] Successful
I0516 00:23:51.549] message:error: the server doesn't have a resource type "foobar"
I0516 00:23:51.549] has not:No resources found
I0516 00:23:51.640] Successful
I0516 00:23:51.640] message:No resources found.
I0516 00:23:51.640] has:No resources found
I0516 00:23:51.731] Successful
I0516 00:23:51.731] message:
I0516 00:23:51.731] has not:No resources found
I0516 00:23:51.828] Successful
I0516 00:23:51.828] message:No resources found.
I0516 00:23:51.829] has:No resources found
I0516 00:23:51.927] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:23:52.019] Successful
I0516 00:23:52.019] message:Error from server (NotFound): pods "abc" not found
I0516 00:23:52.019] has:pods "abc" not found
I0516 00:23:52.022] FAIL!
I0516 00:23:52.022] message:Error from server (NotFound): pods "abc" not found
I0516 00:23:52.022] has not:List
I0516 00:23:52.022] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0516 00:23:52.162] Successful
I0516 00:23:52.162] message:I0516 00:23:52.095395   61603 loader.go:359] Config loaded from file:  /tmp/tmp.nbZsG1srxx/.kube/config
I0516 00:23:52.162] I0516 00:23:52.097041   61603 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0516 00:23:52.163] I0516 00:23:52.120746   61603 round_trippers.go:438] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 888 lines ...
I0516 00:23:57.878] Successful
I0516 00:23:57.878] message:NAME    DATA   AGE
I0516 00:23:57.879] one     0      0s
I0516 00:23:57.879] three   0      0s
I0516 00:23:57.879] two     0      0s
I0516 00:23:57.879] STATUS    REASON          MESSAGE
I0516 00:23:57.879] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0516 00:23:57.879] has not:watch is only supported on individual resources
I0516 00:23:58.979] Successful
I0516 00:23:58.980] message:STATUS    REASON          MESSAGE
I0516 00:23:58.980] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0516 00:23:58.980] has not:watch is only supported on individual resources
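The InternalError above is how these watch tests are meant to end: the client is given a one-second request timeout, so the long-lived watch stream is severed mid-read and event decoding fails. A sketch of the same setup with client-go (recent signatures; the kubeconfig path is an assumption):

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is a placeholder.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/admin.kubeconfig")
        if err != nil {
            panic(err)
        }
        cfg.Timeout = 1 * time.Second // client-side request timeout, like --request-timeout=1
        cs := kubernetes.NewForConfigOrDie(cfg)

        w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        // Events stream until the timeout severs the connection; kubectl surfaces
        // that as the "unable to decode an event from the watch stream" failure.
        for ev := range w.ResultChan() {
            fmt.Println(ev.Type)
        }
    }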
I0516 00:23:58.988] +++ [0516 00:23:58] Creating namespace namespace-1557966238-29276
I0516 00:23:59.086] namespace/namespace-1557966238-29276 created
I0516 00:23:59.171] Context "test" modified.
I0516 00:23:59.293] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:23:59.514] pod/valid-pod created
... skipping 104 lines ...
I0516 00:23:59.637] }
I0516 00:23:59.756] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0516 00:24:00.058] <no value>Successful
I0516 00:24:00.059] message:valid-pod:
I0516 00:24:00.059] has:valid-pod:
I0516 00:24:00.173] Successful
I0516 00:24:00.174] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0516 00:24:00.174] 	template was:
I0516 00:24:00.174] 		{.missing}
I0516 00:24:00.174] 	object given to jsonpath engine was:
I0516 00:24:00.176] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-05-16T00:23:59Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-05-16T00:23:59Z"}}, "name":"valid-pod", "namespace":"namespace-1557966238-29276", "resourceVersion":"711", "selfLink":"/api/v1/namespaces/namespace-1557966238-29276/pods/valid-pod", "uid":"638822a7-6540-466f-a411-39ce41d7d01c"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0516 00:24:00.176] has:missing is not found
W0516 00:24:00.276] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
I0516 00:24:00.377] Successful
I0516 00:24:00.377] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0516 00:24:00.377] 	template was:
I0516 00:24:00.377] 		{{.missing}}
I0516 00:24:00.378] 	raw data was:
I0516 00:24:00.378] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-05-16T00:23:59Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-05-16T00:23:59Z"}],"name":"valid-pod","namespace":"namespace-1557966238-29276","resourceVersion":"711","selfLink":"/api/v1/namespaces/namespace-1557966238-29276/pods/valid-pod","uid":"638822a7-6540-466f-a411-39ce41d7d01c"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0516 00:24:00.378] 	object given to template engine was:
I0516 00:24:00.379] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-05-16T00:23:59Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-05-16T00:23:59Z]] name:valid-pod namespace:namespace-1557966238-29276 resourceVersion:711 selfLink:/api/v1/namespaces/namespace-1557966238-29276/pods/valid-pod uid:638822a7-6540-466f-a411-39ce41d7d01c] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0516 00:24:00.379] has:map has no entry for key "missing"
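Both failure modes above are stock Go text/template behavior: by default a missing map key renders as "<no value>" (visible a few lines up), while the missingkey=error option, which kubectl's go-template printer evidently sets, turns it into exactly the error shown. A self-contained reproduction:

    package main

    import (
        "fmt"
        "os"
        "text/template"
    )

    func main() {
        data := map[string]interface{}{"kind": "Pod"}

        lenient := template.Must(template.New("output").Parse("{{.missing}}"))
        _ = lenient.Execute(os.Stdout, data) // prints "<no value>"

        strict := template.Must(template.New("output").Parse("{{.missing}}"))
        strict.Option("missingkey=error")
        err := strict.Execute(os.Stdout, data)
        fmt.Println(err)
        // template: output:1:2: executing "output" at <.missing>:
        // map has no entry for key "missing"
    }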
I0516 00:24:01.373] Successful
I0516 00:24:01.374] message:NAME        READY   STATUS    RESTARTS   AGE
I0516 00:24:01.374] valid-pod   0/1     Pending   0          1s
I0516 00:24:01.374] STATUS      REASON          MESSAGE
I0516 00:24:01.374] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0516 00:24:01.374] has:STATUS
I0516 00:24:01.376] Successful
I0516 00:24:01.376] message:NAME        READY   STATUS    RESTARTS   AGE
I0516 00:24:01.376] valid-pod   0/1     Pending   0          1s
I0516 00:24:01.376] STATUS      REASON          MESSAGE
I0516 00:24:01.377] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0516 00:24:01.377] has:valid-pod
I0516 00:24:02.469] Successful
I0516 00:24:02.469] message:pod/valid-pod
I0516 00:24:02.469] has not:STATUS
I0516 00:24:02.472] Successful
I0516 00:24:02.472] message:pod/valid-pod
... skipping 142 lines ...
I0516 00:24:03.577]   terminationGracePeriodSeconds: 30
I0516 00:24:03.577] status:
I0516 00:24:03.577]   phase: Pending
I0516 00:24:03.577]   qosClass: Guaranteed
I0516 00:24:03.577] has:name: valid-pod
I0516 00:24:03.664] Successful
I0516 00:24:03.664] message:Error from server (NotFound): pods "invalid-pod" not found
I0516 00:24:03.664] has:"invalid-pod" not found
I0516 00:24:03.748] pod "valid-pod" deleted
I0516 00:24:03.861] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:24:04.066] pod/redis-master created
I0516 00:24:04.071] pod/valid-pod created
I0516 00:24:04.202] Successful
... skipping 283 lines ...
I0516 00:24:10.510] +++ command: run_kubectl_exec_pod_tests
I0516 00:24:10.525] +++ [0516 00:24:10] Creating namespace namespace-1557966250-11345
I0516 00:24:10.602] namespace/namespace-1557966250-11345 created
I0516 00:24:10.683] Context "test" modified.
I0516 00:24:10.693] +++ [0516 00:24:10] Testing kubectl exec POD COMMAND
I0516 00:24:10.789] Successful
I0516 00:24:10.789] message:Error from server (NotFound): pods "abc" not found
I0516 00:24:10.789] has:pods "abc" not found
I0516 00:24:10.994] pod/test-pod created
I0516 00:24:11.113] Successful
I0516 00:24:11.113] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0516 00:24:11.114] has not:pods "test-pod" not found
I0516 00:24:11.115] Successful
I0516 00:24:11.115] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0516 00:24:11.115] has not:pod or type/name must be specified
I0516 00:24:11.195] pod "test-pod" deleted
I0516 00:24:11.225] +++ exit code: 0
I0516 00:24:11.271] Recording: run_kubectl_exec_resource_name_tests
I0516 00:24:11.271] Running command: run_kubectl_exec_resource_name_tests
I0516 00:24:11.302] 
... skipping 2 lines ...
I0516 00:24:11.312] +++ command: run_kubectl_exec_resource_name_tests
I0516 00:24:11.327] +++ [0516 00:24:11] Creating namespace namespace-1557966251-11402
I0516 00:24:11.401] namespace/namespace-1557966251-11402 created
I0516 00:24:11.481] Context "test" modified.
I0516 00:24:11.491] +++ [0516 00:24:11] Testing kubectl exec TYPE/NAME COMMAND
I0516 00:24:11.601] Successful
I0516 00:24:11.601] message:error: the server doesn't have a resource type "foo"
I0516 00:24:11.601] has:error:
I0516 00:24:11.692] Successful
I0516 00:24:11.692] message:Error from server (NotFound): deployments.extensions "bar" not found
I0516 00:24:11.692] has:"bar" not found
I0516 00:24:11.899] pod/test-pod created
I0516 00:24:12.133] replicaset.apps/frontend created
W0516 00:24:12.234] I0516 00:24:12.138032   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966251-11402", Name:"frontend", UID:"0e15def2-5cc0-458e-be7b-ce8a4b868847", APIVersion:"apps/v1", ResourceVersion:"827", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-qwtqv
W0516 00:24:12.234] I0516 00:24:12.142786   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966251-11402", Name:"frontend", UID:"0e15def2-5cc0-458e-be7b-ce8a4b868847", APIVersion:"apps/v1", ResourceVersion:"827", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-bllhb
W0516 00:24:12.234] I0516 00:24:12.143287   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966251-11402", Name:"frontend", UID:"0e15def2-5cc0-458e-be7b-ce8a4b868847", APIVersion:"apps/v1", ResourceVersion:"827", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-w4vgl
I0516 00:24:12.356] configmap/test-set-env-config created
I0516 00:24:12.460] Successful
I0516 00:24:12.461] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I0516 00:24:12.461] has:not implemented
I0516 00:24:12.566] Successful
I0516 00:24:12.567] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0516 00:24:12.567] has not:not found
I0516 00:24:12.569] Successful
I0516 00:24:12.569] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0516 00:24:12.569] has not:pod or type/name must be specified
I0516 00:24:12.672] Successful
I0516 00:24:12.672] message:Error from server (BadRequest): pod frontend-bllhb does not have a host assigned
I0516 00:24:12.672] has not:not found
I0516 00:24:12.675] Successful
I0516 00:24:12.675] message:Error from server (BadRequest): pod frontend-bllhb does not have a host assigned
I0516 00:24:12.675] has not:pod or type/name must be specified
I0516 00:24:12.757] pod "test-pod" deleted
I0516 00:24:12.846] replicaset.extensions "frontend" deleted
I0516 00:24:12.935] configmap "test-set-env-config" deleted
I0516 00:24:12.964] +++ exit code: 0
I0516 00:24:13.010] Recording: run_create_secret_tests
I0516 00:24:13.010] Running command: run_create_secret_tests
I0516 00:24:13.039] 
I0516 00:24:13.042] +++ Running case: test-cmd.run_create_secret_tests 
I0516 00:24:13.045] +++ working dir: /go/src/k8s.io/kubernetes
I0516 00:24:13.048] +++ command: run_create_secret_tests
I0516 00:24:13.158] Successful
I0516 00:24:13.158] message:Error from server (NotFound): secrets "mysecret" not found
I0516 00:24:13.159] has:secrets "mysecret" not found
I0516 00:24:13.342] Successful
I0516 00:24:13.342] message:Error from server (NotFound): secrets "mysecret" not found
I0516 00:24:13.343] has:secrets "mysecret" not found
I0516 00:24:13.345] Successful
I0516 00:24:13.345] message:user-specified
I0516 00:24:13.345] has:user-specified
I0516 00:24:13.428] Successful
I0516 00:24:13.516] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"d94033be-b849-421c-82d0-f1b43a9818e3","resourceVersion":"847","creationTimestamp":"2019-05-16T00:24:13Z"}}
... skipping 164 lines ...
I0516 00:24:16.727] valid-pod   0/1     Pending   0          0s
I0516 00:24:16.727] has:valid-pod
I0516 00:24:17.820] Successful
I0516 00:24:17.820] message:NAME        READY   STATUS    RESTARTS   AGE
I0516 00:24:17.821] valid-pod   0/1     Pending   0          0s
I0516 00:24:17.821] STATUS      REASON          MESSAGE
I0516 00:24:17.821] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0516 00:24:17.821] has:Timeout exceeded while reading body
I0516 00:24:17.915] Successful
I0516 00:24:17.915] message:NAME        READY   STATUS    RESTARTS   AGE
I0516 00:24:17.915] valid-pod   0/1     Pending   0          1s
I0516 00:24:17.916] has:valid-pod
I0516 00:24:17.993] Successful
I0516 00:24:17.994] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0516 00:24:17.994] has:Invalid timeout value
I0516 00:24:18.073] pod "valid-pod" deleted
I0516 00:24:18.102] +++ exit code: 0
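The two timeout cases in the block above both exercise the client-side request timeout: a short numeric value lets the watch expire with the net/http Client.Timeout error, while a malformed value is rejected before any request is sent. A sketch assuming the --request-timeout flag, which matches the error text:

  kubectl get pod valid-pod --request-timeout=1
  # => watch stream decode fails: Client.Timeout exceeded while reading body
  kubectl get pod valid-pod --request-timeout=invalid
  # => error: Invalid timeout value. Timeout must be a single integer in seconds, ...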
I0516 00:24:18.151] Recording: run_crd_tests
I0516 00:24:18.151] Running command: run_crd_tests
I0516 00:24:18.179] 
... skipping 237 lines ...
I0516 00:24:23.059] foo.company.com/test patched
I0516 00:24:23.152] crd.sh:237: Successful get foos/test {{.patched}}: value1
I0516 00:24:23.233] foo.company.com/test patched
I0516 00:24:23.325] crd.sh:239: Successful get foos/test {{.patched}}: value2
I0516 00:24:23.406] foo.company.com/test patched
I0516 00:24:23.498] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I0516 00:24:23.653] +++ [0516 00:24:23] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0516 00:24:23.715] {
I0516 00:24:23.715]     "apiVersion": "company.com/v1",
I0516 00:24:23.715]     "kind": "Foo",
I0516 00:24:23.715]     "metadata": {
I0516 00:24:23.715]         "annotations": {
I0516 00:24:23.716]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 334 lines ...
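The patch sequence above shows why custom resources need an explicit patch type: kubectl has no Go type for company.com/v1 Foo, so it cannot compute a strategic merge patch and errors out suggesting --type merge. A sketch matching the recorded change-cause (flags copied from the annotation above):

  kubectl patch foos/test --type=merge -p '{"patched":"value2"}' --record=true
  # Setting the field to null in a merge patch removes it, yielding "<no value>".
  kubectl patch foos/test --type=merge -p '{"patched":null}' --record=true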
I0516 00:24:33.552] namespace/non-native-resources created
I0516 00:24:33.811] bar.company.com/test created
I0516 00:24:33.997] crd.sh:456: Successful get bars {{len .items}}: 1
I0516 00:24:34.123] namespace "non-native-resources" deleted
I0516 00:24:39.384] crd.sh:459: Successful get bars {{len .items}}: 0
I0516 00:24:39.574] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0516 00:24:39.675] Error from server (NotFound): namespaces "non-native-resources" not found
I0516 00:24:39.775] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0516 00:24:39.808] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0516 00:24:39.923] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0516 00:24:39.974] +++ exit code: 0
I0516 00:24:40.067] Recording: run_cmd_with_img_tests
I0516 00:24:40.068] Running command: run_cmd_with_img_tests
... skipping 10 lines ...
W0516 00:24:40.413] I0516 00:24:40.413153   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966280-19161", Name:"test1-7b9c75bcb9", UID:"d063fcc0-3e81-432a-a8c9-5dcb9bef2746", APIVersion:"apps/v1", ResourceVersion:"1000", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-7b9c75bcb9-scsf7
I0516 00:24:40.514] Successful
I0516 00:24:40.514] message:deployment.apps/test1 created
I0516 00:24:40.514] has:deployment.apps/test1 created
I0516 00:24:40.515] deployment.extensions "test1" deleted
I0516 00:24:40.591] Successful
I0516 00:24:40.592] message:error: Invalid image name "InvalidImageName": invalid reference format
I0516 00:24:40.592] has:error: Invalid image name "InvalidImageName": invalid reference format
I0516 00:24:40.612] +++ exit code: 0
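The image test above validates the reference client-side, so an OCI-invalid name never reaches the API server. One plausible command shape (the script's exact subcommand and generator flags are not shown in the log):

  kubectl run test1 --image=validname
  # => deployment.apps/test1 created (via a 1.14-era deployment generator)
  kubectl run test2 --image=InvalidImageName
  # => error: Invalid image name "InvalidImageName": invalid reference format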
I0516 00:24:40.668] +++ [0516 00:24:40] Testing recursive resources
I0516 00:24:40.676] +++ [0516 00:24:40] Creating namespace namespace-1557966280-19262
I0516 00:24:40.753] namespace/namespace-1557966280-19262 created
I0516 00:24:40.835] Context "test" modified.
I0516 00:24:40.945] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:24:41.274] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:24:41.278] Successful
I0516 00:24:41.278] message:pod/busybox0 created
I0516 00:24:41.278] pod/busybox1 created
I0516 00:24:41.278] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0516 00:24:41.278] has:error validating data: kind not set
I0516 00:24:41.380] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:24:41.567] generic-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0516 00:24:41.569] Successful
I0516 00:24:41.569] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0516 00:24:41.570] has:Object 'Kind' is missing
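The recursive suite running here feeds kubectl a directory tree whose third manifest intentionally spells the kind key as "ind", so every --recursive invocation reports success for busybox0 and busybox1 plus exactly one validation or decode error for the broken file. A sketch of the create step (paths from the log; flags are standard kubectl):

  kubectl create -f hack/testdata/recursive/pod --recursive
  # => pod/busybox0 created, pod/busybox1 created, plus
  #    error validating ".../busybox-broken.yaml": kind not set
  # --validate=false skips only the schema check; the broken file
  # still fails to decode because 'Kind' is missing.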
I0516 00:24:41.665] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:24:42.009] generic-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0516 00:24:42.012] Successful
I0516 00:24:42.013] message:pod/busybox0 replaced
I0516 00:24:42.013] pod/busybox1 replaced
I0516 00:24:42.013] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0516 00:24:42.013] has:error validating data: kind not set
I0516 00:24:42.123] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:24:42.244] Successful
I0516 00:24:42.245] message:Name:         busybox0
I0516 00:24:42.245] Namespace:    namespace-1557966280-19262
I0516 00:24:42.245] Priority:     0
I0516 00:24:42.245] Node:         <none>
... skipping 153 lines ...
I0516 00:24:42.263] has:Object 'Kind' is missing
I0516 00:24:42.355] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:24:42.557] generic-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0516 00:24:42.559] Successful
I0516 00:24:42.560] message:pod/busybox0 annotated
I0516 00:24:42.560] pod/busybox1 annotated
I0516 00:24:42.560] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0516 00:24:42.560] has:Object 'Kind' is missing
I0516 00:24:42.660] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:24:42.989] generic-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0516 00:24:42.991] Successful
I0516 00:24:42.992] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0516 00:24:42.992] pod/busybox0 configured
I0516 00:24:42.992] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0516 00:24:42.992] pod/busybox1 configured
I0516 00:24:42.992] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0516 00:24:42.992] has:error validating data: kind not set
I0516 00:24:43.093] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:24:43.289] deployment.apps/nginx created
W0516 00:24:43.390] I0516 00:24:43.295141   50880 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557966280-19262", Name:"nginx", UID:"9c99c3f3-4025-4a9f-a04d-888ccc0a06f9", APIVersion:"apps/v1", ResourceVersion:"1024", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-958dc566b to 3
W0516 00:24:43.390] I0516 00:24:43.298973   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966280-19262", Name:"nginx-958dc566b", UID:"c32b2a92-cc82-436e-b9ef-190465f6efd0", APIVersion:"apps/v1", ResourceVersion:"1025", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-958dc566b-76886
W0516 00:24:43.390] I0516 00:24:43.303057   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966280-19262", Name:"nginx-958dc566b", UID:"c32b2a92-cc82-436e-b9ef-190465f6efd0", APIVersion:"apps/v1", ResourceVersion:"1025", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-958dc566b-xsdcf
W0516 00:24:43.391] I0516 00:24:43.305854   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966280-19262", Name:"nginx-958dc566b", UID:"c32b2a92-cc82-436e-b9ef-190465f6efd0", APIVersion:"apps/v1", ResourceVersion:"1025", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-958dc566b-98s9q
... skipping 48 lines ...
W0516 00:24:43.886] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0516 00:24:43.987] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:24:44.102] generic-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:24:44.105] Successful
I0516 00:24:44.105] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0516 00:24:44.105] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0516 00:24:44.106] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0516 00:24:44.106] has:Object 'Kind' is missing
I0516 00:24:44.214] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:24:44.314] Successful
I0516 00:24:44.315] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0516 00:24:44.315] has:busybox0:busybox1:
I0516 00:24:44.317] Successful
I0516 00:24:44.318] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0516 00:24:44.318] has:Object 'Kind' is missing
I0516 00:24:44.418] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:24:44.521] pod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0516 00:24:44.625] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0516 00:24:44.628] Successful
I0516 00:24:44.628] message:pod/busybox0 labeled
I0516 00:24:44.629] pod/busybox1 labeled
I0516 00:24:44.629] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0516 00:24:44.629] has:Object 'Kind' is missing
I0516 00:24:44.730] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:24:44.830] pod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
W0516 00:24:44.931] I0516 00:24:44.255650   50880 namespace_controller.go:171] Namespace has been deleted non-native-resources
I0516 00:24:45.032] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0516 00:24:45.032] Successful
I0516 00:24:45.033] message:pod/busybox0 patched
I0516 00:24:45.033] pod/busybox1 patched
I0516 00:24:45.033] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0516 00:24:45.034] has:Object 'Kind' is missing
I0516 00:24:45.060] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:24:45.262] generic-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:24:45.265] Successful
I0516 00:24:45.265] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0516 00:24:45.266] pod "busybox0" force deleted
I0516 00:24:45.266] pod "busybox1" force deleted
I0516 00:24:45.266] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0516 00:24:45.266] has:Object 'Kind' is missing
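Force deletion walks the same tree; the immediate-deletion warning and the per-file decode error are both expected (flags are standard kubectl; the exact script invocation is an assumption):

  kubectl delete -f hack/testdata/recursive/pod -R --grace-period=0 --force
  # => warning: Immediate deletion does not wait for confirmation ...
  #    pod "busybox0" and "busybox1" force deleted, plus the decode error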
I0516 00:24:45.369] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:24:45.576] replicationcontroller/busybox0 created
I0516 00:24:45.581] replicationcontroller/busybox1 created
W0516 00:24:45.682] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0516 00:24:45.682] I0516 00:24:45.581650   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557966280-19262", Name:"busybox0", UID:"73a21a31-27a7-4ff5-8b87-576090f51cb2", APIVersion:"v1", ResourceVersion:"1055", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-cpgwq
W0516 00:24:45.682] I0516 00:24:45.585789   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557966280-19262", Name:"busybox1", UID:"78e308fa-d6dd-48c9-af28-9944d9ed522c", APIVersion:"v1", ResourceVersion:"1057", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-qc7sv
I0516 00:24:45.783] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:24:45.805] generic-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:24:45.902] generic-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I0516 00:24:46.001] generic-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I0516 00:24:46.196] generic-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0516 00:24:46.296] generic-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0516 00:24:46.299] Successful
I0516 00:24:46.299] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0516 00:24:46.300] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0516 00:24:46.300] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 00:24:46.300] has:Object 'Kind' is missing
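The HPA assertions above (min 1, max 2, target 80%) correspond to a recursive autoscale call of roughly this shape (exact flags assumed):

  kubectl autoscale -f hack/testdata/recursive/rc -R --min=1 --max=2 --cpu-percent=80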
I0516 00:24:46.382] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0516 00:24:46.474] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0516 00:24:46.588] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:24:46.687] generic-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I0516 00:24:46.791] generic-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I0516 00:24:47.008] generic-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0516 00:24:47.111] generic-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0516 00:24:47.114] Successful
I0516 00:24:47.114] message:service/busybox0 exposed
I0516 00:24:47.114] service/busybox1 exposed
I0516 00:24:47.115] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 00:24:47.115] has:Object 'Kind' is missing
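The service assertions above (unnamed port, port 80) match a recursive expose call of roughly this shape (exact flags assumed):

  kubectl expose -f hack/testdata/recursive/rc -R --port=80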
I0516 00:24:47.234] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:24:47.337] generic-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I0516 00:24:47.441] generic-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I0516 00:24:47.665] generic-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I0516 00:24:47.770] generic-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I0516 00:24:47.773] Successful
I0516 00:24:47.773] message:replicationcontroller/busybox0 scaled
I0516 00:24:47.773] replicationcontroller/busybox1 scaled
I0516 00:24:47.773] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 00:24:47.774] has:Object 'Kind' is missing
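Scaling both controllers from 1 to 2 replicas is again a single recursive call (exact flags assumed):

  kubectl scale -f hack/testdata/recursive/rc -R --replicas=2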
I0516 00:24:47.880] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:24:48.082] generic-resources.sh:381: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:24:48.086] Successful
I0516 00:24:48.086] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0516 00:24:48.086] replicationcontroller "busybox0" force deleted
I0516 00:24:48.086] replicationcontroller "busybox1" force deleted
I0516 00:24:48.087] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 00:24:48.087] has:Object 'Kind' is missing
I0516 00:24:48.188] generic-resources.sh:386: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:24:48.386] deployment.apps/nginx1-deployment created
I0516 00:24:48.391] deployment.apps/nginx0-deployment created
W0516 00:24:48.492] I0516 00:24:47.546677   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557966280-19262", Name:"busybox0", UID:"73a21a31-27a7-4ff5-8b87-576090f51cb2", APIVersion:"v1", ResourceVersion:"1076", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-g8rdk
W0516 00:24:48.493] I0516 00:24:47.555785   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557966280-19262", Name:"busybox1", UID:"78e308fa-d6dd-48c9-af28-9944d9ed522c", APIVersion:"v1", ResourceVersion:"1079", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-jr76m
W0516 00:24:48.494] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0516 00:24:48.494] I0516 00:24:48.391743   50880 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557966280-19262", Name:"nginx1-deployment", UID:"b26748d9-2870-46b7-bdda-6bc18e741f4c", APIVersion:"apps/v1", ResourceVersion:"1098", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-67c99bcc6b to 2
W0516 00:24:48.494] I0516 00:24:48.395663   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966280-19262", Name:"nginx1-deployment-67c99bcc6b", UID:"507ccbfe-fdd8-4b6d-ad32-e9040146f373", APIVersion:"apps/v1", ResourceVersion:"1100", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-67c99bcc6b-tfzns
W0516 00:24:48.495] I0516 00:24:48.400458   50880 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557966280-19262", Name:"nginx0-deployment", UID:"571dc9c0-8cc1-4bed-b0fb-a01d718df0cb", APIVersion:"apps/v1", ResourceVersion:"1099", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-5886cf98fc to 2
W0516 00:24:48.495] I0516 00:24:48.400958   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966280-19262", Name:"nginx1-deployment-67c99bcc6b", UID:"507ccbfe-fdd8-4b6d-ad32-e9040146f373", APIVersion:"apps/v1", ResourceVersion:"1100", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-67c99bcc6b-9zp4s
W0516 00:24:48.496] I0516 00:24:48.405512   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966280-19262", Name:"nginx0-deployment-5886cf98fc", UID:"3d69e6cc-bea8-4153-bf85-e67439a7f684", APIVersion:"apps/v1", ResourceVersion:"1104", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-5886cf98fc-8tl77
W0516 00:24:48.496] I0516 00:24:48.410342   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966280-19262", Name:"nginx0-deployment-5886cf98fc", UID:"3d69e6cc-bea8-4153-bf85-e67439a7f684", APIVersion:"apps/v1", ResourceVersion:"1104", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-5886cf98fc-9ltxd
I0516 00:24:48.597] generic-resources.sh:390: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0516 00:24:48.634] generic-resources.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0516 00:24:48.870] generic-resources.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0516 00:24:48.872] Successful
I0516 00:24:48.873] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0516 00:24:48.873] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0516 00:24:48.873] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0516 00:24:48.873] has:Object 'Kind' is missing
I0516 00:24:48.979] deployment.apps/nginx1-deployment paused
I0516 00:24:48.985] deployment.apps/nginx0-deployment paused
I0516 00:24:49.113] generic-resources.sh:402: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0516 00:24:49.116] Successful
I0516 00:24:49.116] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
I0516 00:24:49.499] 1         <none>
I0516 00:24:49.499] 
I0516 00:24:49.499] deployment.apps/nginx0-deployment 
I0516 00:24:49.499] REVISION  CHANGE-CAUSE
I0516 00:24:49.499] 1         <none>
I0516 00:24:49.499] 
I0516 00:24:49.500] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0516 00:24:49.500] has:nginx0-deployment
I0516 00:24:49.503] Successful
I0516 00:24:49.504] message:deployment.apps/nginx1-deployment 
I0516 00:24:49.504] REVISION  CHANGE-CAUSE
I0516 00:24:49.504] 1         <none>
I0516 00:24:49.504] 
I0516 00:24:49.504] deployment.apps/nginx0-deployment 
I0516 00:24:49.504] REVISION  CHANGE-CAUSE
I0516 00:24:49.504] 1         <none>
I0516 00:24:49.504] 
I0516 00:24:49.505] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0516 00:24:49.505] has:nginx1-deployment
I0516 00:24:49.507] Successful
I0516 00:24:49.507] message:deployment.apps/nginx1-deployment 
I0516 00:24:49.507] REVISION  CHANGE-CAUSE
I0516 00:24:49.507] 1         <none>
I0516 00:24:49.507] 
I0516 00:24:49.507] deployment.apps/nginx0-deployment 
I0516 00:24:49.507] REVISION  CHANGE-CAUSE
I0516 00:24:49.507] 1         <none>
I0516 00:24:49.507] 
I0516 00:24:49.508] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0516 00:24:49.508] has:Object 'Kind' is missing
I0516 00:24:49.588] deployment.apps "nginx1-deployment" force deleted
I0516 00:24:49.593] deployment.apps "nginx0-deployment" force deleted
W0516 00:24:49.694] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0516 00:24:49.695] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0516 00:24:50.711] generic-resources.sh:424: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:24:50.926] replicationcontroller/busybox0 created
I0516 00:24:50.930] replicationcontroller/busybox1 created
W0516 00:24:51.031] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0516 00:24:51.032] I0516 00:24:50.930415   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557966280-19262", Name:"busybox0", UID:"de408c4e-18c7-40da-9ee8-6ee1bdd66365", APIVersion:"v1", ResourceVersion:"1147", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-k8z7j
W0516 00:24:51.032] I0516 00:24:50.936808   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557966280-19262", Name:"busybox1", UID:"a2b16d59-42d2-4a96-a1aa-5240e0c9206a", APIVersion:"v1", ResourceVersion:"1149", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-bjrrt
I0516 00:24:51.133] generic-resources.sh:428: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:24:51.159] Successful
I0516 00:24:51.160] message:no rollbacker has been implemented for "ReplicationController"
I0516 00:24:51.160] no rollbacker has been implemented for "ReplicationController"
... skipping 3 lines ...
I0516 00:24:51.162] message:no rollbacker has been implemented for "ReplicationController"
I0516 00:24:51.162] no rollbacker has been implemented for "ReplicationController"
I0516 00:24:51.163] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 00:24:51.163] has:Object 'Kind' is missing
I0516 00:24:51.262] Successful
I0516 00:24:51.263] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 00:24:51.263] error: replicationcontrollers "busybox0" pausing is not supported
I0516 00:24:51.263] error: replicationcontrollers "busybox1" pausing is not supported
I0516 00:24:51.263] has:Object 'Kind' is missing
I0516 00:24:51.265] Successful
I0516 00:24:51.265] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 00:24:51.265] error: replicationcontrollers "busybox0" pausing is not supported
I0516 00:24:51.265] error: replicationcontrollers "busybox1" pausing is not supported
I0516 00:24:51.265] has:replicationcontrollers "busybox0" pausing is not supported
I0516 00:24:51.268] Successful
I0516 00:24:51.268] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 00:24:51.268] error: replicationcontrollers "busybox0" pausing is not supported
I0516 00:24:51.269] error: replicationcontrollers "busybox1" pausing is not supported
I0516 00:24:51.269] has:replicationcontrollers "busybox1" pausing is not supported
I0516 00:24:51.377] Successful
I0516 00:24:51.378] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 00:24:51.378] error: replicationcontrollers "busybox0" resuming is not supported
I0516 00:24:51.378] error: replicationcontrollers "busybox1" resuming is not supported
I0516 00:24:51.378] has:Object 'Kind' is missing
I0516 00:24:51.380] Successful
I0516 00:24:51.380] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 00:24:51.380] error: replicationcontrollers "busybox0" resuming is not supported
I0516 00:24:51.380] error: replicationcontrollers "busybox1" resuming is not supported
I0516 00:24:51.381] has:replicationcontrollers "busybox0" resuming is not supported
I0516 00:24:51.383] Successful
I0516 00:24:51.384] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 00:24:51.384] error: replicationcontrollers "busybox0" resuming is not supported
I0516 00:24:51.384] error: replicationcontrollers "busybox1" resuming is not supported
I0516 00:24:51.384] has:replicationcontrollers "busybox0" resuming is not supported
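ReplicationControllers implement neither rollback nor pause/resume, which is what the assertion groups above pin down; the command shapes are assumptions:

  kubectl rollout history -f hack/testdata/recursive/rc -R
  # => no rollbacker has been implemented for "ReplicationController"
  kubectl rollout pause -f hack/testdata/recursive/rc -R
  # => error: replicationcontrollers "busybox0" pausing is not supported
  kubectl rollout resume -f hack/testdata/recursive/rc -R
  # => error: replicationcontrollers "busybox0" resuming is not supported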
I0516 00:24:51.463] replicationcontroller "busybox0" force deleted
I0516 00:24:51.468] replicationcontroller "busybox1" force deleted
W0516 00:24:51.569] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0516 00:24:51.569] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 00:24:52.481] Recording: run_namespace_tests
I0516 00:24:52.481] Running command: run_namespace_tests
I0516 00:24:52.510] 
I0516 00:24:52.512] +++ Running case: test-cmd.run_namespace_tests 
I0516 00:24:52.515] +++ working dir: /go/src/k8s.io/kubernetes
I0516 00:24:52.519] +++ command: run_namespace_tests
... skipping 4 lines ...
W0516 00:24:56.563] I0516 00:24:56.563096   50880 controller_utils.go:1029] Waiting for caches to sync for resource quota controller
W0516 00:24:56.664] I0516 00:24:56.663593   50880 controller_utils.go:1036] Caches are synced for resource quota controller
W0516 00:24:57.070] I0516 00:24:57.070072   50880 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
W0516 00:24:57.171] I0516 00:24:57.170533   50880 controller_utils.go:1036] Caches are synced for garbage collector controller
I0516 00:24:57.906] namespace/my-namespace condition met
I0516 00:24:58.005] Successful
I0516 00:24:58.005] message:Error from server (NotFound): namespaces "my-namespace" not found
I0516 00:24:58.005] has: not found
I0516 00:24:58.077] namespace/my-namespace created
I0516 00:24:58.205] core.sh:1330: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0516 00:24:58.431] Successful
I0516 00:24:58.431] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0516 00:24:58.431] namespace "kube-node-lease" deleted
... skipping 30 lines ...
I0516 00:24:58.434] namespace "namespace-1557966254-3031" deleted
I0516 00:24:58.434] namespace "namespace-1557966255-23523" deleted
I0516 00:24:58.434] namespace "namespace-1557966258-25285" deleted
I0516 00:24:58.434] namespace "namespace-1557966259-6728" deleted
I0516 00:24:58.434] namespace "namespace-1557966280-19161" deleted
I0516 00:24:58.434] namespace "namespace-1557966280-19262" deleted
I0516 00:24:58.435] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0516 00:24:58.435] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0516 00:24:58.435] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0516 00:24:58.435] has:warning: deleting cluster-scoped resources
I0516 00:24:58.435] Successful
I0516 00:24:58.436] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0516 00:24:58.436] namespace "kube-node-lease" deleted
I0516 00:24:58.436] namespace "my-namespace" deleted
I0516 00:24:58.436] namespace "namespace-1557966145-4896" deleted
... skipping 28 lines ...
I0516 00:24:58.439] namespace "namespace-1557966254-3031" deleted
I0516 00:24:58.439] namespace "namespace-1557966255-23523" deleted
I0516 00:24:58.439] namespace "namespace-1557966258-25285" deleted
I0516 00:24:58.439] namespace "namespace-1557966259-6728" deleted
I0516 00:24:58.439] namespace "namespace-1557966280-19161" deleted
I0516 00:24:58.439] namespace "namespace-1557966280-19262" deleted
I0516 00:24:58.439] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0516 00:24:58.439] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0516 00:24:58.439] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0516 00:24:58.440] has:namespace "my-namespace" deleted
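Deleting every namespace sweeps the user namespaces but not the protected system ones, which the API server refuses, hence the three Forbidden errors above. Command shape assumed:

  kubectl delete namespaces --all
  # => user namespaces deleted; "default", "kube-public" and "kube-system"
  #    are forbidden: this namespace may not be deleted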
I0516 00:24:58.549] core.sh:1342: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I0516 00:24:58.628] namespace/other created
I0516 00:24:58.736] core.sh:1346: Successful get namespaces/other {{.metadata.name}}: other
I0516 00:24:58.843] core.sh:1350: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:24:59.052] pod/valid-pod created
I0516 00:24:59.177] core.sh:1354: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0516 00:24:59.280] core.sh:1356: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0516 00:24:59.380] Successful
I0516 00:24:59.380] message:error: a resource cannot be retrieved by name across all namespaces
I0516 00:24:59.380] has:a resource cannot be retrieved by name across all namespaces
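Fetching a namespaced object by name is incompatible with --all-namespaces, since the same name may exist in many namespaces; the check fails client-side (command shape assumed):

  kubectl get pods valid-pod --all-namespaces
  # => error: a resource cannot be retrieved by name across all namespaces
  kubectl get pods valid-pod --namespace=other   # by-name get scoped to one namespace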
I0516 00:24:59.488] core.sh:1363: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0516 00:24:59.575] pod "valid-pod" force deleted
W0516 00:24:59.676] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0516 00:24:59.777] core.sh:1367: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:24:59.777] namespace "other" deleted
... skipping 151 lines ...
I0516 00:25:20.214] +++ command: run_client_config_tests
I0516 00:25:20.232] +++ [0516 00:25:20] Creating namespace namespace-1557966320-10317
I0516 00:25:20.320] namespace/namespace-1557966320-10317 created
I0516 00:25:20.394] Context "test" modified.
I0516 00:25:20.404] +++ [0516 00:25:20] Testing client config
I0516 00:25:20.482] Successful
I0516 00:25:20.482] message:error: stat missing: no such file or directory
I0516 00:25:20.482] has:missing: no such file or directory
I0516 00:25:20.563] Successful
I0516 00:25:20.564] message:error: stat missing: no such file or directory
I0516 00:25:20.564] has:missing: no such file or directory
I0516 00:25:20.641] Successful
I0516 00:25:20.641] message:error: stat missing: no such file or directory
I0516 00:25:20.641] has:missing: no such file or directory
I0516 00:25:20.726] Successful
I0516 00:25:20.726] message:Error in configuration: context was not found for specified context: missing-context
I0516 00:25:20.726] has:context was not found for specified context: missing-context
I0516 00:25:20.809] Successful
I0516 00:25:20.810] message:error: no server found for cluster "missing-cluster"
I0516 00:25:20.810] has:no server found for cluster "missing-cluster"
I0516 00:25:20.887] Successful
I0516 00:25:20.887] message:error: auth info "missing-user" does not exist
I0516 00:25:20.887] has:auth info "missing-user" does not exist
I0516 00:25:21.043] Successful
I0516 00:25:21.043] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0516 00:25:21.043] has:Error loading config file
I0516 00:25:21.123] Successful
I0516 00:25:21.123] message:error: stat missing-config: no such file or directory
I0516 00:25:21.124] has:no such file or directory
I0516 00:25:21.143] +++ exit code: 0
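Each client-config case above isolates one failure mode of kubeconfig loading; file and entry names are taken from the log, flag usage is standard kubectl:

  kubectl get pods --kubeconfig=missing              # => error: stat missing: no such file or directory
  kubectl get pods --context=missing-context         # => context was not found for specified context
  kubectl get pods --cluster=missing-cluster         # => error: no server found for cluster "missing-cluster"
  kubectl get pods --user=missing-user               # => error: auth info "missing-user" does not exist
  kubectl get pods --kubeconfig=/tmp/newconfig.yaml  # => Error loading config file (unregistered version)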
I0516 00:25:21.233] Recording: run_service_accounts_tests
I0516 00:25:21.233] Running command: run_service_accounts_tests
I0516 00:25:21.259] 
I0516 00:25:21.262] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 35 lines ...
I0516 00:25:28.291] Labels:                        run=pi
I0516 00:25:28.291] Annotations:                   <none>
I0516 00:25:28.291] Schedule:                      59 23 31 2 *
I0516 00:25:28.291] Concurrency Policy:            Allow
I0516 00:25:28.291] Suspend:                       False
I0516 00:25:28.292] Successful Job History Limit:  3
I0516 00:25:28.292] Failed Job History Limit:      1
I0516 00:25:28.292] Starting Deadline Seconds:     <unset>
I0516 00:25:28.292] Selector:                      <unset>
I0516 00:25:28.292] Parallelism:                   <unset>
I0516 00:25:28.292] Completions:                   <unset>
I0516 00:25:28.292] Pod Template:
I0516 00:25:28.292]   Labels:  run=pi
... skipping 33 lines ...
I0516 00:25:28.842]                 run=pi
I0516 00:25:28.842] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0516 00:25:28.842] Controlled By:  CronJob/pi
I0516 00:25:28.842] Parallelism:    1
I0516 00:25:28.842] Completions:    1
I0516 00:25:28.842] Start Time:     Thu, 16 May 2019 00:25:28 +0000
I0516 00:25:28.842] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0516 00:25:28.843] Pod Template:
I0516 00:25:28.843]   Labels:  controller-uid=493a6464-6e0e-4abf-bf1e-fa55e91a32a4
I0516 00:25:28.843]            job-name=test-job
I0516 00:25:28.843]            run=pi
I0516 00:25:28.843]   Containers:
I0516 00:25:28.843]    pi:
... skipping 388 lines ...
I0516 00:25:39.359]     role: padawan
I0516 00:25:39.360]   sessionAffinity: None
I0516 00:25:39.360]   type: ClusterIP
I0516 00:25:39.360] status:
I0516 00:25:39.360]   loadBalancer: {}
W0516 00:25:39.460] I0516 00:25:39.258667   50880 namespace_controller.go:171] Namespace has been deleted test-jobs
W0516 00:25:39.461] error: you must specify resources by --filename when --local is set.
W0516 00:25:39.461] Example resource specifications include:
W0516 00:25:39.461]    '-f rsrc.yaml'
W0516 00:25:39.461]    '--filename=rsrc.json'
I0516 00:25:39.565] core.sh:886: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0516 00:25:39.776] core.sh:893: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0516 00:25:39.866] service "redis-master" deleted
... skipping 107 lines ...
I0516 00:25:47.790] apps.sh:80: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0516 00:25:47.895] apps.sh:81: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0516 00:25:48.003] daemonset.extensions/bind rolled back
I0516 00:25:48.126] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0516 00:25:48.242] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0516 00:25:48.365] Successful
I0516 00:25:48.365] message:error: unable to find specified revision 1000000 in history
I0516 00:25:48.365] has:unable to find specified revision
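The daemonset rollbacks above address revisions explicitly; an unknown revision is rejected before anything changes (resource name from the log):

  kubectl rollout undo daemonset/bind --to-revision=1
  # => daemonset.extensions/bind rolled back
  kubectl rollout undo daemonset/bind --to-revision=1000000
  # => error: unable to find specified revision 1000000 in history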
I0516 00:25:48.470] apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0516 00:25:48.572] apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0516 00:25:48.687] daemonset.extensions/bind rolled back
I0516 00:25:48.813] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0516 00:25:48.932] apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 28 lines ...
I0516 00:25:50.589] Namespace:    namespace-1557966349-5056
I0516 00:25:50.590] Selector:     app=guestbook,tier=frontend
I0516 00:25:50.590] Labels:       app=guestbook
I0516 00:25:50.590]               tier=frontend
I0516 00:25:50.590] Annotations:  <none>
I0516 00:25:50.590] Replicas:     3 current / 3 desired
I0516 00:25:50.590] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 00:25:50.590] Pod Template:
I0516 00:25:50.590]   Labels:  app=guestbook
I0516 00:25:50.590]            tier=frontend
I0516 00:25:50.590]   Containers:
I0516 00:25:50.590]    php-redis:
I0516 00:25:50.590]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0516 00:25:50.711] Namespace:    namespace-1557966349-5056
I0516 00:25:50.712] Selector:     app=guestbook,tier=frontend
I0516 00:25:50.712] Labels:       app=guestbook
I0516 00:25:50.712]               tier=frontend
I0516 00:25:50.712] Annotations:  <none>
I0516 00:25:50.712] Replicas:     3 current / 3 desired
I0516 00:25:50.712] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 00:25:50.712] Pod Template:
I0516 00:25:50.712]   Labels:  app=guestbook
I0516 00:25:50.712]            tier=frontend
I0516 00:25:50.712]   Containers:
I0516 00:25:50.712]    php-redis:
I0516 00:25:50.712]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0516 00:25:50.840] Namespace:    namespace-1557966349-5056
I0516 00:25:50.840] Selector:     app=guestbook,tier=frontend
I0516 00:25:50.840] Labels:       app=guestbook
I0516 00:25:50.840]               tier=frontend
I0516 00:25:50.840] Annotations:  <none>
I0516 00:25:50.840] Replicas:     3 current / 3 desired
I0516 00:25:50.840] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 00:25:50.840] Pod Template:
I0516 00:25:50.840]   Labels:  app=guestbook
I0516 00:25:50.841]            tier=frontend
I0516 00:25:50.841]   Containers:
I0516 00:25:50.841]    php-redis:
I0516 00:25:50.841]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I0516 00:25:50.968] Namespace:    namespace-1557966349-5056
I0516 00:25:50.968] Selector:     app=guestbook,tier=frontend
I0516 00:25:50.968] Labels:       app=guestbook
I0516 00:25:50.968]               tier=frontend
I0516 00:25:50.968] Annotations:  <none>
I0516 00:25:50.968] Replicas:     3 current / 3 desired
I0516 00:25:50.969] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 00:25:50.969] Pod Template:
I0516 00:25:50.969]   Labels:  app=guestbook
I0516 00:25:50.969]            tier=frontend
I0516 00:25:50.969]   Containers:
I0516 00:25:50.969]    php-redis:
I0516 00:25:50.969]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0516 00:25:51.137] Namespace:    namespace-1557966349-5056
I0516 00:25:51.137] Selector:     app=guestbook,tier=frontend
I0516 00:25:51.137] Labels:       app=guestbook
I0516 00:25:51.137]               tier=frontend
I0516 00:25:51.137] Annotations:  <none>
I0516 00:25:51.137] Replicas:     3 current / 3 desired
I0516 00:25:51.137] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 00:25:51.137] Pod Template:
I0516 00:25:51.137]   Labels:  app=guestbook
I0516 00:25:51.137]            tier=frontend
I0516 00:25:51.137]   Containers:
I0516 00:25:51.138]    php-redis:
I0516 00:25:51.138]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0516 00:25:51.259] Namespace:    namespace-1557966349-5056
I0516 00:25:51.259] Selector:     app=guestbook,tier=frontend
I0516 00:25:51.259] Labels:       app=guestbook
I0516 00:25:51.259]               tier=frontend
I0516 00:25:51.259] Annotations:  <none>
I0516 00:25:51.259] Replicas:     3 current / 3 desired
I0516 00:25:51.259] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 00:25:51.259] Pod Template:
I0516 00:25:51.259]   Labels:  app=guestbook
I0516 00:25:51.260]            tier=frontend
I0516 00:25:51.260]   Containers:
I0516 00:25:51.260]    php-redis:
I0516 00:25:51.260]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0516 00:25:51.377] Namespace:    namespace-1557966349-5056
I0516 00:25:51.377] Selector:     app=guestbook,tier=frontend
I0516 00:25:51.377] Labels:       app=guestbook
I0516 00:25:51.377]               tier=frontend
I0516 00:25:51.377] Annotations:  <none>
I0516 00:25:51.377] Replicas:     3 current / 3 desired
I0516 00:25:51.378] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 00:25:51.378] Pod Template:
I0516 00:25:51.378]   Labels:  app=guestbook
I0516 00:25:51.378]            tier=frontend
I0516 00:25:51.378]   Containers:
I0516 00:25:51.378]    php-redis:
I0516 00:25:51.378]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0516 00:25:51.502] Namespace:    namespace-1557966349-5056
I0516 00:25:51.502] Selector:     app=guestbook,tier=frontend
I0516 00:25:51.502] Labels:       app=guestbook
I0516 00:25:51.502]               tier=frontend
I0516 00:25:51.502] Annotations:  <none>
I0516 00:25:51.502] Replicas:     3 current / 3 desired
I0516 00:25:51.502] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 00:25:51.502] Pod Template:
I0516 00:25:51.503]   Labels:  app=guestbook
I0516 00:25:51.503]            tier=frontend
I0516 00:25:51.503]   Containers:
I0516 00:25:51.503]    php-redis:
I0516 00:25:51.503]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
W0516 00:25:51.803] I0516 00:25:51.710259   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557966349-5056", Name:"frontend", UID:"55bc8d7e-3aef-4aaa-bc36-65b31fbe504c", APIVersion:"v1", ResourceVersion:"1670", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-7t8gr
I0516 00:25:51.904] core.sh:1071: Successful get rc frontend {{.spec.replicas}}: 2
I0516 00:25:51.953] core.sh:1075: Successful get rc frontend {{.spec.replicas}}: 2
I0516 00:25:52.165] core.sh:1079: Successful get rc frontend {{.spec.replicas}}: 2
I0516 00:25:52.269] core.sh:1083: Successful get rc frontend {{.spec.replicas}}: 2
I0516 00:25:52.366] replicationcontroller/frontend scaled
W0516 00:25:52.467] error: Expected replicas to be 3, was 2
W0516 00:25:52.467] I0516 00:25:52.371832   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557966349-5056", Name:"frontend", UID:"55bc8d7e-3aef-4aaa-bc36-65b31fbe504c", APIVersion:"v1", ResourceVersion:"1677", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-l4dxq
I0516 00:25:52.568] core.sh:1087: Successful get rc frontend {{.spec.replicas}}: 3
I0516 00:25:52.588] core.sh:1091: Successful get rc frontend {{.spec.replicas}}: 3
I0516 00:25:52.680] replicationcontroller/frontend scaled
I0516 00:25:52.786] core.sh:1095: Successful get rc frontend {{.spec.replicas}}: 2
I0516 00:25:52.873] replicationcontroller "frontend" deleted
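The scale error above, "Expected replicas to be 3, was 2", is the --current-replicas precondition failing; without the precondition the same scale succeeds (flag usage assumed):

  kubectl scale rc frontend --current-replicas=3 --replicas=3
  # => error: Expected replicas to be 3, was 2
  kubectl scale rc frontend --replicas=3   # unconditional, succeeds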
... skipping 41 lines ...
I0516 00:25:55.183] service "expose-test-deployment" deleted
I0516 00:25:55.292] Successful
I0516 00:25:55.293] message:service/expose-test-deployment exposed
I0516 00:25:55.293] has:service/expose-test-deployment exposed
I0516 00:25:55.378] service "expose-test-deployment" deleted
I0516 00:25:55.480] Successful
I0516 00:25:55.480] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0516 00:25:55.480] See 'kubectl expose -h' for help and examples
I0516 00:25:55.480] has:invalid deployment: no selectors
I0516 00:25:55.571] Successful
I0516 00:25:55.571] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0516 00:25:55.571] See 'kubectl expose -h' for help and examples
I0516 00:25:55.572] has:invalid deployment: no selectors
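The expose failures above occur because kubectl expose copies the selector from the workload being exposed; a deployment whose spec carries no selector gives it nothing to copy. A hedged sketch of the two outcomes (no-selector-deployment is a hypothetical name for illustration):

    # Works: the Service selector is derived from the deployment's selector.
    kubectl expose deployment nginx-deployment --port=80 --target-port=8000

    # Fails with "invalid deployment: no selectors" when the deployment has
    # no selector to introspect and none is supplied via --selector.
    kubectl expose deployment no-selector-deployment --port=80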
I0516 00:25:55.768] deployment.apps/nginx-deployment created
W0516 00:25:55.869] I0516 00:25:55.773471   50880 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557966349-5056", Name:"nginx-deployment", UID:"d5c42670-c9ec-4455-a7f6-4c26d8417b9b", APIVersion:"apps/v1", ResourceVersion:"1798", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5cb597d4f to 3
W0516 00:25:55.869] I0516 00:25:55.777536   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966349-5056", Name:"nginx-deployment-5cb597d4f", UID:"74775214-9a0d-46d6-8fd7-7760cc6ff8c4", APIVersion:"apps/v1", ResourceVersion:"1799", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5cb597d4f-bm9cj
W0516 00:25:55.870] I0516 00:25:55.782834   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966349-5056", Name:"nginx-deployment-5cb597d4f", UID:"74775214-9a0d-46d6-8fd7-7760cc6ff8c4", APIVersion:"apps/v1", ResourceVersion:"1799", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5cb597d4f-b9g2l
... skipping 23 lines ...
I0516 00:25:57.980] service "frontend" deleted
I0516 00:25:57.989] service "frontend-2" deleted
I0516 00:25:57.996] service "frontend-3" deleted
I0516 00:25:58.003] service "frontend-4" deleted
I0516 00:25:58.010] service "frontend-5" deleted
I0516 00:25:58.113] Successful
I0516 00:25:58.113] message:error: cannot expose a Node
I0516 00:25:58.113] has:cannot expose
I0516 00:25:58.233] Successful
I0516 00:25:58.233] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0516 00:25:58.234] has:metadata.name: Invalid value
I0516 00:25:58.339] Successful
I0516 00:25:58.339] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 30 lines ...
I0516 00:26:00.649] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0516 00:26:00.753] core.sh:1259: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I0516 00:26:00.838] horizontalpodautoscaler.autoscaling "frontend" deleted
I0516 00:26:00.934] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0516 00:26:01.043] core.sh:1263: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0516 00:26:01.131] horizontalpodautoscaler.autoscaling "frontend" deleted
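The two autoscale invocations checked above (hpa 1 2 70 and 2 3 80), and the missing-flag failure that follows, correspond roughly to:

    # --max is mandatory; --min and --cpu-percent have defaults.
    kubectl autoscale rc frontend --min=1 --max=2 --cpu-percent=70   # hpa: 1 2 70
    kubectl autoscale rc frontend --max=3 --min=2 --cpu-percent=80   # hpa: 2 3 80
    kubectl autoscale rc frontend --min=2 --cpu-percent=80           # Error: required flag(s) "max" not set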
W0516 00:26:01.232] Error: required flag(s) "max" not set
W0516 00:26:01.232] 
W0516 00:26:01.232] 
W0516 00:26:01.232] Examples:
W0516 00:26:01.232]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0516 00:26:01.233]   kubectl autoscale deployment foo --min=2 --max=10
W0516 00:26:01.233]   
... skipping 55 lines ...
I0516 00:26:01.501]           limits:
I0516 00:26:01.501]             cpu: 300m
I0516 00:26:01.501]           requests:
I0516 00:26:01.501]             cpu: 300m
I0516 00:26:01.501]       terminationGracePeriodSeconds: 0
I0516 00:26:01.502] status: {}
W0516 00:26:01.602] Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
I0516 00:26:01.800] deployment.apps/nginx-deployment-resources created
W0516 00:26:01.901] I0516 00:26:01.806527   50880 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557966349-5056", Name:"nginx-deployment-resources", UID:"34cc189b-e41b-454b-9dd7-b2321458f968", APIVersion:"apps/v1", ResourceVersion:"1939", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-865b6bb7c6 to 3
W0516 00:26:01.902] I0516 00:26:01.810699   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966349-5056", Name:"nginx-deployment-resources-865b6bb7c6", UID:"165a1ace-2221-4cbf-93d5-438933ac7d8b", APIVersion:"apps/v1", ResourceVersion:"1940", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-865b6bb7c6-bpj62
W0516 00:26:01.902] I0516 00:26:01.815036   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966349-5056", Name:"nginx-deployment-resources-865b6bb7c6", UID:"165a1ace-2221-4cbf-93d5-438933ac7d8b", APIVersion:"apps/v1", ResourceVersion:"1940", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-865b6bb7c6-zd86x
W0516 00:26:01.902] I0516 00:26:01.815621   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966349-5056", Name:"nginx-deployment-resources-865b6bb7c6", UID:"165a1ace-2221-4cbf-93d5-438933ac7d8b", APIVersion:"apps/v1", ResourceVersion:"1940", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-865b6bb7c6-jt2lw
I0516 00:26:02.003] core.sh:1278: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
... skipping 2 lines ...
I0516 00:26:02.257] deployment.extensions/nginx-deployment-resources resource requirements updated
W0516 00:26:02.357] I0516 00:26:02.262342   50880 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557966349-5056", Name:"nginx-deployment-resources", UID:"34cc189b-e41b-454b-9dd7-b2321458f968", APIVersion:"apps/v1", ResourceVersion:"1954", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-69b4c96c9b to 1
W0516 00:26:02.358] I0516 00:26:02.267471   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966349-5056", Name:"nginx-deployment-resources-69b4c96c9b", UID:"76d0e689-7c8b-46ee-8368-b252e0f1d21f", APIVersion:"apps/v1", ResourceVersion:"1955", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69b4c96c9b-t8ljh
I0516 00:26:02.458] core.sh:1283: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
I0516 00:26:02.481] core.sh:1284: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
I0516 00:26:02.692] deployment.extensions/nginx-deployment-resources resource requirements updated
W0516 00:26:02.793] error: unable to find container named redis
W0516 00:26:02.794] I0516 00:26:02.723624   50880 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557966349-5056", Name:"nginx-deployment-resources", UID:"34cc189b-e41b-454b-9dd7-b2321458f968", APIVersion:"apps/v1", ResourceVersion:"1964", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-865b6bb7c6 to 2
W0516 00:26:02.794] I0516 00:26:02.730503   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966349-5056", Name:"nginx-deployment-resources-865b6bb7c6", UID:"165a1ace-2221-4cbf-93d5-438933ac7d8b", APIVersion:"apps/v1", ResourceVersion:"1968", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-865b6bb7c6-jt2lw
W0516 00:26:02.794] I0516 00:26:02.766346   50880 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557966349-5056", Name:"nginx-deployment-resources", UID:"34cc189b-e41b-454b-9dd7-b2321458f968", APIVersion:"apps/v1", ResourceVersion:"1967", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-7bb7d84c58 to 1
W0516 00:26:02.794] I0516 00:26:02.776107   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966349-5056", Name:"nginx-deployment-resources-7bb7d84c58", UID:"44f5183d-b9f6-4444-b61f-5e6e097d5060", APIVersion:"apps/v1", ResourceVersion:"1974", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-7bb7d84c58-xwh4f
I0516 00:26:02.895] core.sh:1289: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0516 00:26:02.959] core.sh:1290: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
... skipping 211 lines ...
I0516 00:26:03.518]     status: "True"
I0516 00:26:03.518]     type: Progressing
I0516 00:26:03.518]   observedGeneration: 4
I0516 00:26:03.518]   replicas: 4
I0516 00:26:03.518]   unavailableReplicas: 4
I0516 00:26:03.518]   updatedReplicas: 1
W0516 00:26:03.619] error: you must specify resources by --filename when --local is set.
W0516 00:26:03.619] Example resource specifications include:
W0516 00:26:03.619]    '-f rsrc.yaml'
W0516 00:26:03.619]    '--filename=rsrc.json'
I0516 00:26:03.720] core.sh:1299: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0516 00:26:03.792] core.sh:1300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I0516 00:26:03.902] core.sh:1301: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
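The "--filename when --local is set" error above reflects the two modes of kubectl set resources; a minimal sketch (deployment.yaml is a placeholder path):

    # Server-side: patch the live object's resource requirements.
    kubectl set resources deployment nginx-deployment-resources --limits=cpu=200m,memory=512Mi

    # Client-side: --local never contacts the server, so the input
    # object must come from a file.
    kubectl set resources -f deployment.yaml --limits=cpu=200m --local -o yaml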
... skipping 44 lines ...
I0516 00:26:05.600]                 pod-template-hash=75c7695cbd
I0516 00:26:05.600] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I0516 00:26:05.601]                 deployment.kubernetes.io/max-replicas: 2
I0516 00:26:05.601]                 deployment.kubernetes.io/revision: 1
I0516 00:26:05.601] Controlled By:  Deployment/test-nginx-apps
I0516 00:26:05.601] Replicas:       1 current / 1 desired
I0516 00:26:05.601] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0516 00:26:05.601] Pod Template:
I0516 00:26:05.601]   Labels:  app=test-nginx-apps
I0516 00:26:05.601]            pod-template-hash=75c7695cbd
I0516 00:26:05.601]   Containers:
I0516 00:26:05.601]    nginx:
I0516 00:26:05.601]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 90 lines ...
I0516 00:26:10.451]     Image:	k8s.gcr.io/nginx:test-cmd
I0516 00:26:10.558] apps.sh:296: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0516 00:26:10.677] deployment.extensions/nginx rolled back
I0516 00:26:11.798] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0516 00:26:12.013] apps.sh:303: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0516 00:26:12.129] deployment.extensions/nginx rolled back
W0516 00:26:12.230] error: unable to find specified revision 1000000 in history
I0516 00:26:13.248] apps.sh:307: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0516 00:26:13.356] deployment.extensions/nginx paused
W0516 00:26:13.493] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
W0516 00:26:13.592] error: deployments.extensions "nginx" can't restart paused deployment (run rollout resume first)
I0516 00:26:13.702] deployment.extensions/nginx resumed
I0516 00:26:13.853] deployment.extensions/nginx rolled back
I0516 00:26:14.075]     deployment.kubernetes.io/revision-history: 1,3
W0516 00:26:14.267] error: desired revision (3) is different from the running revision (5)
I0516 00:26:14.377] deployment.extensions/nginx restarted
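The pause/resume errors above trace the rule that a paused deployment accepts spec edits but no rollouts; a sketch of the sequence the test walks through:

    kubectl rollout pause deployment/nginx
    kubectl rollout undo deployment/nginx      # fails: resume it first
    kubectl rollout restart deployment/nginx   # fails: can't restart paused deployment
    kubectl rollout resume deployment/nginx
    kubectl rollout undo deployment/nginx      # now triggers a new rollout
    kubectl rollout restart deployment/nginx   # now succeeds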
W0516 00:26:14.478] I0516 00:26:14.402634   50880 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557966364-17668", Name:"nginx", UID:"6762e886-bc05-41ea-ac84-a40a0f67cee3", APIVersion:"apps/v1", ResourceVersion:"2188", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-958dc566b to 2
W0516 00:26:14.478] I0516 00:26:14.410034   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966364-17668", Name:"nginx-958dc566b", UID:"12e1305a-d5c8-4d11-aee0-fee58dc8fed5", APIVersion:"apps/v1", ResourceVersion:"2192", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-958dc566b-sm7ff
W0516 00:26:14.479] I0516 00:26:14.422891   50880 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557966364-17668", Name:"nginx", UID:"6762e886-bc05-41ea-ac84-a40a0f67cee3", APIVersion:"apps/v1", ResourceVersion:"2191", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-645ff79588 to 1
W0516 00:26:14.479] I0516 00:26:14.429075   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966364-17668", Name:"nginx-645ff79588", UID:"1213041e-cb2b-4e07-b32c-12e205466959", APIVersion:"apps/v1", ResourceVersion:"2198", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-645ff79588-5shg5
I0516 00:26:15.602] Successful
... skipping 143 lines ...
I0516 00:26:16.796] (Bdeployment.extensions/nginx-deployment image updated
W0516 00:26:16.897] I0516 00:26:16.802290   50880 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557966364-17668", Name:"nginx-deployment", UID:"1557a0ba-5792-427e-a62d-bc24712842a0", APIVersion:"apps/v1", ResourceVersion:"2257", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-64f55cb875 to 1
W0516 00:26:16.898] I0516 00:26:16.806706   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966364-17668", Name:"nginx-deployment-64f55cb875", UID:"d5d6da0c-d59f-4c51-ac50-b4d0dd376676", APIVersion:"apps/v1", ResourceVersion:"2258", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-64f55cb875-g86ss
I0516 00:26:16.998] apps.sh:345: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0516 00:26:17.032] apps.sh:346: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0516 00:26:17.249] deployment.extensions/nginx-deployment image updated
W0516 00:26:17.350] error: unable to find container named "redis"
I0516 00:26:17.451] apps.sh:351: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0516 00:26:17.467] apps.sh:352: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0516 00:26:17.569] deployment.apps/nginx-deployment image updated
I0516 00:26:17.688] apps.sh:355: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0516 00:26:17.800] apps.sh:356: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0516 00:26:17.995] apps.sh:359: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
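The image updates above, including the "unable to find container named \"redis\"" failure, come from kubectl set image, which addresses containers by name within the pod template; a sketch:

    kubectl set image deployment/nginx-deployment nginx=k8s.gcr.io/nginx:1.7.9   # matches a container
    kubectl set image deployment/nginx-deployment redis=redis:5                  # fails: no such container
    kubectl set image deployment/nginx-deployment '*'=k8s.gcr.io/nginx:1.7.9     # '*' updates every container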
... skipping 62 lines ...
I0516 00:26:21.906] Context "test" modified.
I0516 00:26:21.916] +++ [0516 00:26:21] Testing kubectl(v1:replicasets)
I0516 00:26:22.017] apps.sh:510: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:26:22.207] replicaset.apps/frontend created
I0516 00:26:22.233] +++ [0516 00:26:22] Deleting rs
I0516 00:26:22.325] replicaset.extensions "frontend" deleted
W0516 00:26:22.426] E0516 00:26:21.609634   50880 replica_set.go:450] Sync "namespace-1557966364-17668/nginx-deployment-5dd68b6c76" failed with replicasets.apps "nginx-deployment-5dd68b6c76" not found
W0516 00:26:22.426] E0516 00:26:21.659734   50880 replica_set.go:450] Sync "namespace-1557966364-17668/nginx-deployment-57b54775" failed with replicasets.apps "nginx-deployment-57b54775" not found
W0516 00:26:22.426] E0516 00:26:21.709471   50880 replica_set.go:450] Sync "namespace-1557966364-17668/nginx-deployment-5dfd5c49d4" failed with replicasets.apps "nginx-deployment-5dfd5c49d4" not found
W0516 00:26:22.426] E0516 00:26:21.759572   50880 replica_set.go:450] Sync "namespace-1557966364-17668/nginx-deployment-7d8bf5bf54" failed with replicasets.apps "nginx-deployment-7d8bf5bf54" not found
W0516 00:26:22.427] I0516 00:26:22.215587   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966381-14375", Name:"frontend", UID:"836520d8-3057-4ddf-9ef5-05793c9d3d85", APIVersion:"apps/v1", ResourceVersion:"2436", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-p8rkl
W0516 00:26:22.427] I0516 00:26:22.220677   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966381-14375", Name:"frontend", UID:"836520d8-3057-4ddf-9ef5-05793c9d3d85", APIVersion:"apps/v1", ResourceVersion:"2436", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-vhcl9
W0516 00:26:22.427] I0516 00:26:22.221119   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966381-14375", Name:"frontend", UID:"836520d8-3057-4ddf-9ef5-05793c9d3d85", APIVersion:"apps/v1", ResourceVersion:"2436", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-gfzv2
I0516 00:26:22.528] apps.sh:516: Successful get pods -l "tier=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:26:22.547] apps.sh:520: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:26:22.752] replicaset.apps/frontend-no-cascade created
W0516 00:26:22.852] I0516 00:26:22.757658   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966381-14375", Name:"frontend-no-cascade", UID:"3bbe2aec-69a0-4961-856a-840307824b5d", APIVersion:"apps/v1", ResourceVersion:"2452", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-sspcd
W0516 00:26:22.853] I0516 00:26:22.761597   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966381-14375", Name:"frontend-no-cascade", UID:"3bbe2aec-69a0-4961-856a-840307824b5d", APIVersion:"apps/v1", ResourceVersion:"2452", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-6dpbw
W0516 00:26:22.853] I0516 00:26:22.762388   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557966381-14375", Name:"frontend-no-cascade", UID:"3bbe2aec-69a0-4961-856a-840307824b5d", APIVersion:"apps/v1", ResourceVersion:"2452", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-z5sm9
I0516 00:26:22.954] apps.sh:526: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
I0516 00:26:22.954] +++ [0516 00:26:22] Deleting rs
I0516 00:26:22.964] replicaset.extensions "frontend-no-cascade" deleted
W0516 00:26:23.065] E0516 00:26:23.008964   50880 replica_set.go:450] Sync "namespace-1557966381-14375/frontend-no-cascade" failed with Operation cannot be fulfilled on replicasets.apps "frontend-no-cascade": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1557966381-14375/frontend-no-cascade, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 3bbe2aec-69a0-4961-856a-840307824b5d, UID in object meta: 
I0516 00:26:23.165] apps.sh:530: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:26:23.221] apps.sh:532: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
I0516 00:26:23.311] pod "frontend-no-cascade-6dpbw" deleted
I0516 00:26:23.317] pod "frontend-no-cascade-sspcd" deleted
I0516 00:26:23.324] pod "frontend-no-cascade-z5sm9" deleted
I0516 00:26:23.445] apps.sh:535: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
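The no-cascade checks above show orphaning in action: the ReplicaSet is gone at apps.sh:530 while its php-redis pods still exist at apps.sh:532, until they are deleted one by one. A sketch:

    # Delete only the ReplicaSet object; its pods are orphaned, not reaped.
    kubectl delete rs frontend-no-cascade --cascade=false

    # The orphaned pods remain until deleted individually or re-adopted.
    kubectl get pods -l tier=frontend
    kubectl delete pods -l tier=frontend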
... skipping 9 lines ...
I0516 00:26:24.041] Namespace:    namespace-1557966381-14375
I0516 00:26:24.041] Selector:     app=guestbook,tier=frontend
I0516 00:26:24.041] Labels:       app=guestbook
I0516 00:26:24.041]               tier=frontend
I0516 00:26:24.041] Annotations:  <none>
I0516 00:26:24.041] Replicas:     3 current / 3 desired
I0516 00:26:24.041] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 00:26:24.041] Pod Template:
I0516 00:26:24.041]   Labels:  app=guestbook
I0516 00:26:24.041]            tier=frontend
I0516 00:26:24.041]   Containers:
I0516 00:26:24.042]    php-redis:
I0516 00:26:24.042]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0516 00:26:24.182] Namespace:    namespace-1557966381-14375
I0516 00:26:24.182] Selector:     app=guestbook,tier=frontend
I0516 00:26:24.182] Labels:       app=guestbook
I0516 00:26:24.182]               tier=frontend
I0516 00:26:24.182] Annotations:  <none>
I0516 00:26:24.182] Replicas:     3 current / 3 desired
I0516 00:26:24.182] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 00:26:24.182] Pod Template:
I0516 00:26:24.182]   Labels:  app=guestbook
I0516 00:26:24.182]            tier=frontend
I0516 00:26:24.182]   Containers:
I0516 00:26:24.183]    php-redis:
I0516 00:26:24.183]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0516 00:26:24.308] Namespace:    namespace-1557966381-14375
I0516 00:26:24.308] Selector:     app=guestbook,tier=frontend
I0516 00:26:24.309] Labels:       app=guestbook
I0516 00:26:24.309]               tier=frontend
I0516 00:26:24.309] Annotations:  <none>
I0516 00:26:24.309] Replicas:     3 current / 3 desired
I0516 00:26:24.309] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 00:26:24.309] Pod Template:
I0516 00:26:24.309]   Labels:  app=guestbook
I0516 00:26:24.309]            tier=frontend
I0516 00:26:24.309]   Containers:
I0516 00:26:24.309]    php-redis:
I0516 00:26:24.309]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
I0516 00:26:24.443] Namespace:    namespace-1557966381-14375
I0516 00:26:24.443] Selector:     app=guestbook,tier=frontend
I0516 00:26:24.443] Labels:       app=guestbook
I0516 00:26:24.443]               tier=frontend
I0516 00:26:24.443] Annotations:  <none>
I0516 00:26:24.444] Replicas:     3 current / 3 desired
I0516 00:26:24.444] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 00:26:24.444] Pod Template:
I0516 00:26:24.444]   Labels:  app=guestbook
I0516 00:26:24.444]            tier=frontend
I0516 00:26:24.444]   Containers:
I0516 00:26:24.444]    php-redis:
I0516 00:26:24.444]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0516 00:26:24.604] Namespace:    namespace-1557966381-14375
I0516 00:26:24.605] Selector:     app=guestbook,tier=frontend
I0516 00:26:24.605] Labels:       app=guestbook
I0516 00:26:24.605]               tier=frontend
I0516 00:26:24.605] Annotations:  <none>
I0516 00:26:24.605] Replicas:     3 current / 3 desired
I0516 00:26:24.605] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 00:26:24.605] Pod Template:
I0516 00:26:24.605]   Labels:  app=guestbook
I0516 00:26:24.606]            tier=frontend
I0516 00:26:24.606]   Containers:
I0516 00:26:24.606]    php-redis:
I0516 00:26:24.606]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0516 00:26:24.727] Namespace:    namespace-1557966381-14375
I0516 00:26:24.728] Selector:     app=guestbook,tier=frontend
I0516 00:26:24.728] Labels:       app=guestbook
I0516 00:26:24.728]               tier=frontend
I0516 00:26:24.728] Annotations:  <none>
I0516 00:26:24.728] Replicas:     3 current / 3 desired
I0516 00:26:24.728] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 00:26:24.728] Pod Template:
I0516 00:26:24.728]   Labels:  app=guestbook
I0516 00:26:24.728]            tier=frontend
I0516 00:26:24.728]   Containers:
I0516 00:26:24.729]    php-redis:
I0516 00:26:24.729]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0516 00:26:24.845] Namespace:    namespace-1557966381-14375
I0516 00:26:24.845] Selector:     app=guestbook,tier=frontend
I0516 00:26:24.846] Labels:       app=guestbook
I0516 00:26:24.846]               tier=frontend
I0516 00:26:24.846] Annotations:  <none>
I0516 00:26:24.846] Replicas:     3 current / 3 desired
I0516 00:26:24.846] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 00:26:24.846] Pod Template:
I0516 00:26:24.846]   Labels:  app=guestbook
I0516 00:26:24.846]            tier=frontend
I0516 00:26:24.847]   Containers:
I0516 00:26:24.847]    php-redis:
I0516 00:26:24.847]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I0516 00:26:24.962] Namespace:    namespace-1557966381-14375
I0516 00:26:24.962] Selector:     app=guestbook,tier=frontend
I0516 00:26:24.962] Labels:       app=guestbook
I0516 00:26:24.962]               tier=frontend
I0516 00:26:24.962] Annotations:  <none>
I0516 00:26:24.963] Replicas:     3 current / 3 desired
I0516 00:26:24.963] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 00:26:24.963] Pod Template:
I0516 00:26:24.963]   Labels:  app=guestbook
I0516 00:26:24.963]            tier=frontend
I0516 00:26:24.963]   Containers:
I0516 00:26:24.963]    php-redis:
I0516 00:26:24.963]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 180 lines ...
I0516 00:26:30.986] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0516 00:26:31.093] apps.sh:651: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I0516 00:26:31.180] horizontalpodautoscaler.autoscaling "frontend" deleted
I0516 00:26:31.285] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0516 00:26:31.430] apps.sh:655: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0516 00:26:31.489] horizontalpodautoscaler.autoscaling "frontend" deleted
W0516 00:26:31.590] Error: required flag(s) "max" not set
W0516 00:26:31.590] 
W0516 00:26:31.590] 
W0516 00:26:31.590] Examples:
W0516 00:26:31.590]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0516 00:26:31.591]   kubectl autoscale deployment foo --min=2 --max=10
W0516 00:26:31.591]   
... skipping 89 lines ...
I0516 00:26:35.263] apps.sh:439: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0516 00:26:35.374] apps.sh:440: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0516 00:26:35.493] statefulset.apps/nginx rolled back
I0516 00:26:35.609] apps.sh:443: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0516 00:26:35.722] apps.sh:444: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0516 00:26:35.838] Successful
I0516 00:26:35.839] message:error: unable to find specified revision 1000000 in history
I0516 00:26:35.839] has:unable to find specified revision
I0516 00:26:35.941] apps.sh:448: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0516 00:26:36.047] apps.sh:449: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0516 00:26:36.162] statefulset.apps/nginx rolled back
I0516 00:26:36.275] apps.sh:452: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
I0516 00:26:36.380] apps.sh:453: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
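The statefulset rollbacks above use the same rollout history machinery as deployments; a sketch of the calls involved (the out-of-range revision number follows the test):

    kubectl rollout undo statefulset/nginx                        # back to the previous revision
    kubectl rollout undo statefulset/nginx --to-revision=1000000  # fails: revision not in history
    kubectl rollout history statefulset/nginx                     # list what revisions exist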
... skipping 58 lines ...
I0516 00:26:38.635] Name:         mock
I0516 00:26:38.635] Namespace:    namespace-1557966397-28399
I0516 00:26:38.635] Selector:     app=mock
I0516 00:26:38.635] Labels:       app=mock
I0516 00:26:38.635] Annotations:  <none>
I0516 00:26:38.635] Replicas:     1 current / 1 desired
I0516 00:26:38.636] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0516 00:26:38.636] Pod Template:
I0516 00:26:38.636]   Labels:  app=mock
I0516 00:26:38.636]   Containers:
I0516 00:26:38.636]    mock-container:
I0516 00:26:38.636]     Image:        k8s.gcr.io/pause:2.0
I0516 00:26:38.636]     Port:         9949/TCP
... skipping 56 lines ...
I0516 00:26:41.306] Name:         mock
I0516 00:26:41.306] Namespace:    namespace-1557966397-28399
I0516 00:26:41.306] Selector:     app=mock
I0516 00:26:41.306] Labels:       app=mock
I0516 00:26:41.307] Annotations:  <none>
I0516 00:26:41.307] Replicas:     1 current / 1 desired
I0516 00:26:41.307] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0516 00:26:41.307] Pod Template:
I0516 00:26:41.307]   Labels:  app=mock
I0516 00:26:41.307]   Containers:
I0516 00:26:41.307]    mock-container:
I0516 00:26:41.307]     Image:        k8s.gcr.io/pause:2.0
I0516 00:26:41.307]     Port:         9949/TCP
... skipping 56 lines ...
I0516 00:26:43.933] Name:         mock
I0516 00:26:43.933] Namespace:    namespace-1557966397-28399
I0516 00:26:43.933] Selector:     app=mock
I0516 00:26:43.933] Labels:       app=mock
I0516 00:26:43.934] Annotations:  <none>
I0516 00:26:43.934] Replicas:     1 current / 1 desired
I0516 00:26:43.934] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0516 00:26:43.934] Pod Template:
I0516 00:26:43.934]   Labels:  app=mock
I0516 00:26:43.934]   Containers:
I0516 00:26:43.934]    mock-container:
I0516 00:26:43.934]     Image:        k8s.gcr.io/pause:2.0
I0516 00:26:43.934]     Port:         9949/TCP
... skipping 42 lines ...
I0516 00:26:46.425] Namespace:    namespace-1557966397-28399
I0516 00:26:46.426] Selector:     app=mock
I0516 00:26:46.426] Labels:       app=mock
I0516 00:26:46.426]               status=replaced
I0516 00:26:46.426] Annotations:  <none>
I0516 00:26:46.426] Replicas:     1 current / 1 desired
I0516 00:26:46.426] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0516 00:26:46.426] Pod Template:
I0516 00:26:46.426]   Labels:  app=mock
I0516 00:26:46.426]   Containers:
I0516 00:26:46.426]    mock-container:
I0516 00:26:46.426]     Image:        k8s.gcr.io/pause:2.0
I0516 00:26:46.427]     Port:         9949/TCP
... skipping 11 lines ...
I0516 00:26:46.434] Namespace:    namespace-1557966397-28399
I0516 00:26:46.434] Selector:     app=mock2
I0516 00:26:46.434] Labels:       app=mock2
I0516 00:26:46.434]               status=replaced
I0516 00:26:46.434] Annotations:  <none>
I0516 00:26:46.434] Replicas:     1 current / 1 desired
I0516 00:26:46.434] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0516 00:26:46.434] Pod Template:
I0516 00:26:46.434]   Labels:  app=mock2
I0516 00:26:46.435]   Containers:
I0516 00:26:46.435]    mock-container:
I0516 00:26:46.435]     Image:        k8s.gcr.io/pause:2.0
I0516 00:26:46.435]     Port:         9949/TCP
... skipping 103 lines ...
I0516 00:26:52.148] +++ [0516 00:26:52] Creating namespace namespace-1557966412-7706
I0516 00:26:52.227] namespace/namespace-1557966412-7706 created
I0516 00:26:52.309] Context "test" modified.
I0516 00:26:52.321] +++ [0516 00:26:52] Testing persistent volumes
I0516 00:26:52.429] storage.sh:30: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:26:52.635] persistentvolume/pv0001 created
W0516 00:26:52.736] E0516 00:26:52.644237   50880 pv_protection_controller.go:117] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
I0516 00:26:52.836] storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
I0516 00:26:52.849] persistentvolume "pv0001" deleted
I0516 00:26:53.068] persistentvolume/pv0002 created
W0516 00:26:53.169] E0516 00:26:53.073219   50880 pv_protection_controller.go:117] PV pv0002 failed with : Operation cannot be fulfilled on persistentvolumes "pv0002": the object has been modified; please apply your changes to the latest version and try again
I0516 00:26:53.269] storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
I0516 00:26:53.302] persistentvolume "pv0002" deleted
I0516 00:26:53.514] persistentvolume/pv0003 created
W0516 00:26:53.614] E0516 00:26:53.520269   50880 pv_protection_controller.go:117] PV pv0003 failed with : Operation cannot be fulfilled on persistentvolumes "pv0003": the object has been modified; please apply your changes to the latest version and try again
I0516 00:26:53.715] storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
I0516 00:26:53.726] persistentvolume "pv0003" deleted
I0516 00:26:53.841] storage.sh:42: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:26:54.052] persistentvolume/pv0001 created
W0516 00:26:54.153] E0516 00:26:54.056893   50880 pv_protection_controller.go:117] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
I0516 00:26:54.254] storage.sh:45: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
I0516 00:26:54.293] Successful
I0516 00:26:54.293] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0516 00:26:54.293] persistentvolume "pv0001" deleted
I0516 00:26:54.294] has:warning: deleting cluster-scoped resources
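The warning above appears because PersistentVolumes are cluster-scoped, so a namespace on the delete request is ignored rather than honored. A sketch of the kind of call that likely triggers it (some-namespace is a placeholder):

    # -n is accepted but meaningless for cluster-scoped kinds; kubectl warns
    # "deleting cluster-scoped resources, not scoped to the provided namespace".
    kubectl delete pv pv0001 -n some-namespace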
I0516 00:26:54.296] Successful
... skipping 491 lines ...
I0516 00:26:59.858] yes
I0516 00:26:59.859] has:the server doesn't have a resource type
I0516 00:26:59.955] Successful
I0516 00:26:59.956] message:yes
I0516 00:26:59.956] has:yes
I0516 00:27:00.041] Successful
I0516 00:27:00.041] message:error: --subresource can not be used with NonResourceURL
I0516 00:27:00.041] has:subresource can not be used with NonResourceURL
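The --subresource restriction above exists because non-resource URLs (raw paths like /logs or /healthz) have no subresources to check; a sketch of the shapes kubectl auth can-i accepts:

    kubectl auth can-i get pods --subresource=log   # resource + subresource
    kubectl auth can-i get /logs                    # non-resource URL
    kubectl auth can-i get /logs --subresource=log  # rejected with the error above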
I0516 00:27:00.133] Successful
I0516 00:27:00.235] Successful
I0516 00:27:00.236] message:yes
I0516 00:27:00.236] 0
I0516 00:27:00.236] has:0
... skipping 39 lines ...
W0516 00:27:01.050] 		{Verbs:[get list watch] APIGroups:[] Resources:[configmaps] ResourceNames:[] NonResourceURLs:[]}
I0516 00:27:01.151] legacy-script.sh:801: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
I0516 00:27:01.164] legacy-script.sh:802: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
I0516 00:27:01.268] legacy-script.sh:803: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
I0516 00:27:01.375] legacy-script.sh:804: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
I0516 00:27:01.468] Successful
I0516 00:27:01.491] message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
I0516 00:27:01.491] has:only rbac.authorization.k8s.io/v1 is supported
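The reconcile failure above reflects a hard version gate: kubectl auth reconcile parses only rbac.authorization.k8s.io/v1 objects. A sketch (both file names are placeholders):

    # Succeeds when every object in the file is rbac/v1.
    kubectl auth reconcile -f rbac-v1.yaml

    # A v1beta1 ClusterRole anywhere in the input fails with
    # "only rbac.authorization.k8s.io/v1 is supported".
    kubectl auth reconcile -f rbac-v1beta1.yaml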
I0516 00:27:01.560] rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
I0516 00:27:01.566] role.rbac.authorization.k8s.io "testing-R" deleted
I0516 00:27:01.576] clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
I0516 00:27:01.586] clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
I0516 00:27:01.600] Recording: run_retrieve_multiple_tests
... skipping 33 lines ...
I0516 00:27:03.018] +++ Running case: test-cmd.run_kubectl_explain_tests 
I0516 00:27:03.022] +++ working dir: /go/src/k8s.io/kubernetes
I0516 00:27:03.026] +++ command: run_kubectl_explain_tests
I0516 00:27:03.039] +++ [0516 00:27:03] Testing kubectl(v1:explain)
W0516 00:27:03.140] I0516 00:27:02.863519   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557966421-18299", Name:"cassandra", UID:"dd10629e-a571-4a5d-beca-a81d37914c36", APIVersion:"v1", ResourceVersion:"3025", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-9t7zq
W0516 00:27:03.140] I0516 00:27:02.878265   50880 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557966421-18299", Name:"cassandra", UID:"dd10629e-a571-4a5d-beca-a81d37914c36", APIVersion:"v1", ResourceVersion:"3025", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-mq9l2
W0516 00:27:03.140] E0516 00:27:02.882865   50880 replica_set.go:450] Sync "namespace-1557966421-18299/cassandra" failed with replicationcontrollers "cassandra" not found
I0516 00:27:03.247] KIND:     Pod
I0516 00:27:03.248] VERSION:  v1
I0516 00:27:03.248] 
I0516 00:27:03.248] DESCRIPTION:
I0516 00:27:03.248]      Pod is a collection of containers that can run on a host. This resource is
I0516 00:27:03.248]      created by clients and scheduled onto hosts.
... skipping 977 lines ...
I0516 00:27:33.181] message:node/127.0.0.1 already uncordoned (dry run)
I0516 00:27:33.182] has:already uncordoned
I0516 00:27:33.297] node-management.sh:119: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
I0516 00:27:33.402] node/127.0.0.1 labeled
I0516 00:27:33.535] node-management.sh:124: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
I0516 00:27:33.630] Successful
I0516 00:27:33.631] message:error: cannot specify both a node name and a --selector option
I0516 00:27:33.631] See 'kubectl drain -h' for help and examples
I0516 00:27:33.631] has:cannot specify both a node name
I0516 00:27:33.719] Successful
I0516 00:27:33.719] message:error: USAGE: cordon NODE [flags]
I0516 00:27:33.719] See 'kubectl cordon -h' for help and examples
I0516 00:27:33.719] has:error\: USAGE\: cordon NODE
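The drain and cordon usage errors above pin down their argument shapes; a sketch:

    kubectl drain 127.0.0.1 --ignore-daemonsets   # target one node by name
    kubectl drain --selector test-cmd=auth        # or a set of nodes by label, never both
    kubectl cordon 127.0.0.1                      # cordon/uncordon take exactly one NODE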
I0516 00:27:33.813] node/127.0.0.1 already uncordoned
I0516 00:27:33.909] Successful
I0516 00:27:33.909] message:error: You must provide one or more resources by argument or filename.
I0516 00:27:33.909] Example resource specifications include:
I0516 00:27:33.909]    '-f rsrc.yaml'
I0516 00:27:33.909]    '--filename=rsrc.json'
I0516 00:27:33.909]    '<resource> <name>'
I0516 00:27:33.910]    '<resource>'
I0516 00:27:33.910] has:must provide one or more resources
... skipping 15 lines ...
I0516 00:27:34.504] Successful
I0516 00:27:34.505] message:The following compatible plugins are available:
I0516 00:27:34.505] 
I0516 00:27:34.505] test/fixtures/pkg/kubectl/plugins/version/kubectl-version
I0516 00:27:34.505]   - warning: kubectl-version overwrites existing command: "kubectl version"
I0516 00:27:34.505] 
I0516 00:27:34.505] error: one plugin warning was found
I0516 00:27:34.505] has:kubectl-version overwrites existing command: "kubectl version"
I0516 00:27:34.603] Successful
I0516 00:27:34.603] message:The following compatible plugins are available:
I0516 00:27:34.603] 
I0516 00:27:34.603] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0516 00:27:34.603] test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
I0516 00:27:34.604]   - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0516 00:27:34.604] 
I0516 00:27:34.604] error: one plugin warning was found
I0516 00:27:34.604] has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
I0516 00:27:34.697] Successful
I0516 00:27:34.697] message:The following compatible plugins are available:
I0516 00:27:34.698] 
I0516 00:27:34.698] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0516 00:27:34.698] has:plugins are available
I0516 00:27:34.791] Successful
I0516 00:27:34.792] message:Unable read directory "test/fixtures/pkg/kubectl/plugins/empty" from your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory. Skipping...
I0516 00:27:34.792] error: unable to find any kubectl plugins in your PATH
I0516 00:27:34.792] has:unable to find any kubectl plugins in your PATH
I0516 00:27:34.884] Successful
I0516 00:27:34.884] message:I am plugin foo
I0516 00:27:34.884] has:plugin foo
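The plugin checks above all hinge on kubectl's discovery rule: any executable on PATH named kubectl-<something> becomes a subcommand, with earlier PATH entries shadowing later ones. A minimal sketch:

    # An executable named kubectl-foo on PATH becomes "kubectl foo".
    cat > kubectl-foo <<'EOF'
    #!/bin/bash
    echo "I am plugin foo"
    EOF
    chmod +x kubectl-foo
    PATH="$PWD:$PATH" kubectl foo           # prints: I am plugin foo
    PATH="$PWD:$PATH" kubectl plugin list   # also reports shadowing/overwrite warnings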
I0516 00:27:34.987] Successful
I0516 00:27:34.988] message:Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.0-alpha.0.61+5945096bc5f99c", GitCommit:"5945096bc5f99c1108133737ba79d1b49c193d4f", GitTreeState:"clean", BuildDate:"2019-05-16T00:20:19Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
... skipping 9 lines ...
I0516 00:27:35.105] 
I0516 00:27:35.109] +++ Running case: test-cmd.run_impersonation_tests 
I0516 00:27:35.113] +++ working dir: /go/src/k8s.io/kubernetes
I0516 00:27:35.118] +++ command: run_impersonation_tests
I0516 00:27:35.133] +++ [0516 00:27:35] Testing impersonation
I0516 00:27:35.225] Successful
I0516 00:27:35.225] message:error: requesting groups or user-extra for  without impersonating a user
I0516 00:27:35.225] has:without impersonating a user
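The impersonation error above encodes an ordering rule: --as-group is only meaningful alongside --as. A sketch of the pattern the CSR checks below rely on (csr.yaml is a placeholder manifest):

    kubectl get pods --as-group=system:masters   # fails: groups without a user
    kubectl create -f csr.yaml --as=user1        # the created CSR records spec.username: user1
    kubectl get csr foo -o jsonpath='{.spec.username}'   # -> user1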
I0516 00:27:35.464] certificatesigningrequest.certificates.k8s.io/foo created
I0516 00:27:35.612] authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
I0516 00:27:35.726] authorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
I0516 00:27:35.824] certificatesigningrequest.certificates.k8s.io "foo" deleted
I0516 00:27:36.075] certificatesigningrequest.certificates.k8s.io/foo created
... skipping 56 lines ...
W0516 00:27:39.570] I0516 00:27:39.567776   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.570] I0516 00:27:39.567786   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.570] I0516 00:27:39.567942   47549 controller.go:176] Shutting down kubernetes service endpoint reconciler
W0516 00:27:39.570] I0516 00:27:39.568121   47549 secure_serving.go:160] Stopped listening on 127.0.0.1:8080
W0516 00:27:39.570] I0516 00:27:39.568201   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.570] I0516 00:27:39.568214   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.571] W0516 00:27:39.568559   47549 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0516 00:27:39.571] I0516 00:27:39.568615   47549 secure_serving.go:160] Stopped listening on 127.0.0.1:6443
W0516 00:27:39.571] I0516 00:27:39.568674   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.571] I0516 00:27:39.568685   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.571] W0516 00:27:39.568788   47549 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0516 00:27:39.571] W0516 00:27:39.568577   47549 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0516 00:27:39.572] I0516 00:27:39.569089   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.572] I0516 00:27:39.569124   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.572] I0516 00:27:39.569261   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.572] I0516 00:27:39.569285   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.572] I0516 00:27:39.569318   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.572] I0516 00:27:39.569347   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.572] I0516 00:27:39.569378   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.572] I0516 00:27:39.569387   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.572] I0516 00:27:39.569460   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.573] I0516 00:27:39.569472   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.573] I0516 00:27:39.569504   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.573] I0516 00:27:39.569518   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.573] W0516 00:27:39.569579   47549 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0516 00:27:39.573] W0516 00:27:39.569638   47549 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0516 00:27:39.574] I0516 00:27:39.569718   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.574] I0516 00:27:39.569731   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.574] W0516 00:27:39.569745   47549 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0516 00:27:39.574] I0516 00:27:39.569762   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.574] I0516 00:27:39.569771   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.574] I0516 00:27:39.569813   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.574] I0516 00:27:39.569822   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.574] I0516 00:27:39.569853   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.575] I0516 00:27:39.569861   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.575] I0516 00:27:39.569894   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.575] I0516 00:27:39.569903   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.575] I0516 00:27:39.569930   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.575] I0516 00:27:39.569939   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.575] I0516 00:27:39.569965   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.575] I0516 00:27:39.569975   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.575] I0516 00:27:39.570005   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.576] W0516 00:27:39.570012   47549 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0516 00:27:39.576] I0516 00:27:39.570014   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.576] I0516 00:27:39.570044   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.576] I0516 00:27:39.570051   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.577] W0516 00:27:39.570399   47549 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0516 00:27:39.577] I0516 00:27:39.570649   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.577] I0516 00:27:39.570665   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.577] I0516 00:27:39.570676   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.577] I0516 00:27:39.570680   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.577] I0516 00:27:39.570710   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.577] I0516 00:27:39.570728   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 20 lines ...
W0516 00:27:39.580] I0516 00:27:39.570977   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.580] I0516 00:27:39.570988   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.580] I0516 00:27:39.571021   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.580] I0516 00:27:39.571031   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.580] I0516 00:27:39.571078   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.581] I0516 00:27:39.568987   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.581] W0516 00:27:39.571596   47549 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0516 00:27:39.581] W0516 00:27:39.571711   47549 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0516 00:27:39.581] W0516 00:27:39.571755   47549 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0516 00:27:39.581] W0516 00:27:39.571792   47549 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0516 00:27:39.582] I0516 00:27:39.571874   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 130 lines ...
W0516 00:27:39.603] E0516 00:27:39.579958   47549 controller.go:179] rpc error: code = Unavailable desc = transport is closing
W0516 00:27:39.603] W0516 00:27:39.581921   47549 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0516 00:27:39.603] I0516 00:27:39.582654   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.603] I0516 00:27:39.582929   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
W0516 00:27:39.604] I0516 00:27:39.583776   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:27:39.604] I0516 00:27:39.583976   47549 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
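The reconnect spam above comes from the kube-apiserver's embedded etcd client: the test harness stops etcd between phases, so the grpc-go ClientConn keeps retrying 127.0.0.1:2379 in the background, logging one clientconn.go warning per failed transport attempt until the next etcd instance comes up (see the "On try 2, etcd" line below). A minimal, self-contained sketch of that behavior — the target address matches the log, but everything else here is illustrative and not taken from this repo:

// Sketch: a grpc-go ClientConn retrying a dead endpoint. Nothing listens
// on 127.0.0.1:2379 here, so every attempt fails with "connection refused"
// and the connection cycles CONNECTING -> TRANSIENT_FAILURE indefinitely.
package main

import (
	"context"
	"time"

	"google.golang.org/grpc"
)

func main() {
	// Dial is non-blocking: it returns immediately, and the reconnect
	// loop (the source of the "Reconnecting..." messages) runs async.
	conn, err := grpc.Dial("127.0.0.1:2379", grpc.WithInsecure())
	if err != nil {
		panic(err) // Dial itself rarely fails without WithBlock.
	}
	defer conn.Close()

	// Watch the connectivity state machine churn while the port is closed;
	// each failed attempt corresponds to one clientconn.go warning.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	for conn.WaitForStateChange(ctx, conn.GetState()) {
	}
}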
I0516 00:27:39.704] No resources found
I0516 00:27:39.704] No resources found
... skipping 9 lines ...
I0516 00:27:45.048] +++ [0516 00:27:45] On try 2, etcd: : http://127.0.0.1:2379
I0516 00:27:45.059] {"action":"set","node":{"key":"/_test","value":"","modifiedIndex":4,"createdIndex":4}}
I0516 00:27:45.065] +++ [0516 00:27:45] Running integration test cases
I0516 00:27:50.622] Running tests for APIVersion: v1,admissionregistration.k8s.io/v1beta1,admission.k8s.io/v1beta1,apps/v1,apps/v1beta1,apps/v1beta2,auditregistration.k8s.io/v1alpha1,authentication.k8s.io/v1,authentication.k8s.io/v1beta1,authorization.k8s.io/v1,authorization.k8s.io/v1beta1,autoscaling/v1,autoscaling/v2beta1,autoscaling/v2beta2,batch/v1,batch/v1beta1,batch/v2alpha1,certificates.k8s.io/v1beta1,coordination.k8s.io/v1beta1,coordination.k8s.io/v1,extensions/v1beta1,events.k8s.io/v1beta1,imagepolicy.k8s.io/v1alpha1,networking.k8s.io/v1,networking.k8s.io/v1beta1,node.k8s.io/v1alpha1,node.k8s.io/v1beta1,policy/v1beta1,rbac.authorization.k8s.io/v1,rbac.authorization.k8s.io/v1beta1,rbac.authorization.k8s.io/v1alpha1,scheduling.k8s.io/v1alpha1,scheduling.k8s.io/v1beta1,scheduling.k8s.io/v1,settings.k8s.io/v1alpha1,storage.k8s.io/v1beta1,storage.k8s.io/v1,storage.k8s.io/v1alpha1,
I0516 00:27:50.670] +++ [0516 00:27:50] Running tests without code coverage
W0516 00:29:03.087] # k8s.io/kubernetes/test/e2e/scheduling
W0516 00:29:03.087] test/e2e/scheduling/nvidia-gpus.go:275:3: undefined: ginkgo.Failf
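This compile error is the actual failure behind the [build failed] verdict: ginkgo exports Fail(message string, callerSkip ...int) but no Failf, so test/e2e/scheduling does not build, and k8s.io/kubernetes/test/integration/auth (the one FAIL entry below, presumably pulling that package in through a shared dependency) is reported as a build failure rather than a test failure. A minimal sketch of the two usual fixes — the function and error names are hypothetical stand-ins, not the real code at nvidia-gpus.go:275:

// Sketch only; the surrounding code in test/e2e/scheduling/nvidia-gpus.go
// is not shown in this log.
package scheduling

import (
	"fmt"

	"github.com/onsi/ginkgo"

	"k8s.io/kubernetes/test/e2e/framework"
)

// failJob is a hypothetical stand-in for the failing call site.
func failJob(err error) {
	// Does not compile: ginkgo has no Failf.
	// ginkgo.Failf("job failed: %v", err)

	// Option 1: format the message yourself and call ginkgo.Fail.
	ginkgo.Fail(fmt.Sprintf("job failed: %v", err))

	// Option 2: use the e2e framework helper, the usual pattern in this tree.
	framework.Failf("job failed: %v", err)
}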
I0516 00:41:03.053] ok  	k8s.io/kubernetes/test/integration/apimachinery	276.888s
I0516 00:41:03.054] ok  	k8s.io/kubernetes/test/integration/apiserver	82.320s
I0516 00:41:03.054] ok  	k8s.io/kubernetes/test/integration/apiserver/admissionwebhook	66.391s
I0516 00:41:03.054] ok  	k8s.io/kubernetes/test/integration/apiserver/apply	51.586s
I0516 00:41:03.054] FAIL	k8s.io/kubernetes/test/integration/auth [build failed]
I0516 00:41:03.054] ok  	k8s.io/kubernetes/test/integration/client	49.198s
I0516 00:41:03.055] ok  	k8s.io/kubernetes/test/integration/configmap	3.855s
I0516 00:41:03.055] ok  	k8s.io/kubernetes/test/integration/cronjob	34.757s
I0516 00:41:03.055] ok  	k8s.io/kubernetes/test/integration/daemonset	531.158s
I0516 00:41:03.055] ok  	k8s.io/kubernetes/test/integration/defaulttolerationseconds	3.730s
I0516 00:41:03.055] ok  	k8s.io/kubernetes/test/integration/deployment	204.740s
... skipping 25 lines ...
I0516 00:41:03.058] ok  	k8s.io/kubernetes/test/integration/storageclasses	3.680s
I0516 00:41:03.058] ok  	k8s.io/kubernetes/test/integration/tls	6.625s
I0516 00:41:03.058] ok  	k8s.io/kubernetes/test/integration/ttlcontroller	9.887s
I0516 00:41:03.058] ok  	k8s.io/kubernetes/test/integration/volume	92.560s
I0516 00:41:03.058] ok  	k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration	192.110s
I0516 00:41:17.542] +++ [0516 00:41:17] Saved JUnit XML test report to /workspace/artifacts/junit_d431ed5f68ae4ddf888439fb96b687a923412204_20190516-002750.xml
I0516 00:41:17.546] Makefile:185: recipe for target 'test' failed
I0516 00:41:17.558] +++ [0516 00:41:17] Cleaning up etcd
W0516 00:41:17.659] make[1]: *** [test] Error 1
W0516 00:41:17.659] !!! [0516 00:41:17] Call tree:
W0516 00:41:17.660] !!! [0516 00:41:17]  1: hack/make-rules/test-integration.sh:102 runTests(...)
I0516 00:41:18.233] +++ [0516 00:41:18] Integration test cleanup complete
I0516 00:41:18.233] Makefile:204: recipe for target 'test-integration' failed
W0516 00:41:18.334] make: *** [test-integration] Error 1
W0516 00:41:22.965] Traceback (most recent call last):
W0516 00:41:22.965]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 178, in <module>
W0516 00:41:22.965]     ARGS.exclude_typecheck, ARGS.exclude_godep)
W0516 00:41:22.965]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 140, in main
W0516 00:41:22.966]     check(*cmd)
W0516 00:41:22.966]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W0516 00:41:22.966]     subprocess.check_call(cmd)
W0516 00:41:22.966]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0516 00:41:23.003]     raise CalledProcessError(retcode, cmd)
W0516 00:41:23.004] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=n', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'EXCLUDE_TYPECHECK=n', '-e', 'EXCLUDE_GODEP=n', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.14-v20190318-2ac98e338', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
E0516 00:41:23.012] Command failed
I0516 00:41:23.012] process 677 exited with code 1 after 28.4m
E0516 00:41:23.012] FAIL: pull-kubernetes-integration
I0516 00:41:23.013] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0516 00:41:23.694] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0516 00:41:23.747] process 111822 exited with code 0 after 0.0m
I0516 00:41:23.747] Call:  gcloud config get-value account
I0516 00:41:24.064] process 111834 exited with code 0 after 0.0m
I0516 00:41:24.064] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0516 00:41:24.064] Upload result and artifacts...
I0516 00:41:24.064] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/76401/pull-kubernetes-integration/1128815263098081284
I0516 00:41:24.065] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/76401/pull-kubernetes-integration/1128815263098081284/artifacts
W0516 00:41:25.205] CommandException: One or more URLs matched no objects.
E0516 00:41:25.355] Command failed
I0516 00:41:25.355] process 111846 exited with code 1 after 0.0m
W0516 00:41:25.355] Remote dir gs://kubernetes-jenkins/pr-logs/pull/76401/pull-kubernetes-integration/1128815263098081284/artifacts not exist yet
I0516 00:41:25.356] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/76401/pull-kubernetes-integration/1128815263098081284/artifacts
I0516 00:41:30.236] process 111988 exited with code 0 after 0.1m
W0516 00:41:30.237] metadata path /workspace/_artifacts/metadata.json does not exist
W0516 00:41:30.237] metadata not found or invalid, init with empty metadata
... skipping 23 lines ...