PR: Huang-Wei: [1.11] Automated cherry pick of #75144: kubelet: updated logic of verifying a static critical pod
Result: FAILURE
Tests: 1 failed / 74 succeeded
Started: 2019-03-15 20:41
Elapsed: 14m40s
Revision:
Builder: gke-prow-containerd-pool-99179761-4377
Refs: release-1.11:ede55fd5, 74996:dd30816a
pod: 87e9189d-4762-11e9-82d4-0a580a6c0fcd
infra-commit: 5def04463
repo: k8s.io/kubernetes
repo-commit: c4300c6bbf8666a97923aaecb66c6129fed32724
repos: {u'k8s.io/kubernetes': u'release-1.11:ede55fd572985547208c79eb73c122f3e8f7f79c,74996:dd30816a231549183a766688f80e4e8336fe715a'}

Test Failures


test-cmd run_rs_tests 46s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=test\-cmd\srun\_rs\_tests$'
!!! [0315 20:54:49] Call tree:
!!! [0315 20:54:49]  1: /go/src/k8s.io/kubernetes/hack/make-rules/test-cmd-util.sh:3500 kube::test::wait_object_assert(...)
!!! [0315 20:54:49]  2: /go/src/k8s.io/kubernetes/hack/make-rules/../../third_party/forked/shell2junit/sh2ju.sh:47 run_rs_tests(...)
!!! [0315 20:54:49]  3: /go/src/k8s.io/kubernetes/hack/make-rules/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0315 20:54:49]  4: /go/src/k8s.io/kubernetes/hack/make-rules/test-cmd-util.sh:107 juLog(...)
!!! [0315 20:54:49]  5: /go/src/k8s.io/kubernetes/hack/make-rules/test-cmd-util.sh:5449 record_command(...)
!!! [0315 20:54:49]  6: hack/make-rules/test-cmd.sh:105 runTests(...)
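The failing assertion is kube::test::wait_object_assert at test-cmd-util.sh:3500, reached from run_rs_tests. Note that the --ginkgo.focus repro line above is generated by the junit tooling; test-cmd is actually driven by a shell entry point, and the release-1.11 branch has no per-case selector, so a local repro runs the whole suite. A minimal sketch, assuming a release-1.11 checkout at the commits listed under Refs:

  # Build and run the full test-cmd suite; runTests (hack/make-rules/test-cmd.sh:105)
  # eventually records run_rs_tests, which is where this job failed.
  make test-cmd
  # Equivalently, invoke the script from the call tree directly:
  hack/make-rules/test-cmd.sh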
				
				stdout/stderr recorded in junit_test-cmd.xml




Error lines from build-log.txt

... skipping 603 lines ...
W0315 20:50:17.604] I0315 20:50:17.604279   73057 serviceaccounts_controller.go:115] Starting service account controller
W0315 20:50:17.604] I0315 20:50:17.604288   73057 controller_utils.go:1025] Waiting for caches to sync for service account controller
W0315 20:50:17.604] I0315 20:50:17.604449   73057 taint_manager.go:184] Sending events to api server.
W0315 20:50:17.605] I0315 20:50:17.604501   73057 controllermanager.go:479] Started "nodelifecycle"
W0315 20:50:17.605] I0315 20:50:17.604617   73057 node_lifecycle_controller.go:361] Starting node controller
W0315 20:50:17.605] I0315 20:50:17.604635   73057 controller_utils.go:1025] Waiting for caches to sync for taint controller
W0315 20:50:17.605] W0315 20:50:17.604656   73057 garbagecollector.go:649] failed to discover preferred resources: the cache has not been filled yet
W0315 20:50:17.605] I0315 20:50:17.605122   73057 controllermanager.go:479] Started "garbagecollector"
W0315 20:50:17.605] I0315 20:50:17.605144   73057 garbagecollector.go:133] Starting garbage collector controller
W0315 20:50:17.605] I0315 20:50:17.605152   73057 controller_utils.go:1025] Waiting for caches to sync for garbage collector controller
W0315 20:50:17.605] I0315 20:50:17.605163   73057 graph_builder.go:308] GraphBuilder running
W0315 20:50:17.605] I0315 20:50:17.605381   73057 controllermanager.go:479] Started "cronjob"
W0315 20:50:17.606] I0315 20:50:17.605401   73057 cronjob_controller.go:94] Starting CronJob Manager
W0315 20:50:17.606] E0315 20:50:17.605664   73057 core.go:72] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0315 20:50:17.606] W0315 20:50:17.605682   73057 controllermanager.go:476] Skipping "service"
W0315 20:50:17.682] I0315 20:50:17.681991   73057 controller_utils.go:1032] Caches are synced for disruption controller
W0315 20:50:17.682] I0315 20:50:17.682027   73057 disruption.go:296] Sending events to api server.
W0315 20:50:17.684] I0315 20:50:17.683994   73057 controller_utils.go:1032] Caches are synced for ClusterRoleAggregator controller
W0315 20:50:17.684] I0315 20:50:17.684257   73057 controller_utils.go:1032] Caches are synced for PV protection controller
W0315 20:50:17.684] I0315 20:50:17.684530   73057 controller_utils.go:1032] Caches are synced for GC controller
W0315 20:50:17.689] I0315 20:50:17.688927   73057 controller_utils.go:1032] Caches are synced for namespace controller
W0315 20:50:17.690] I0315 20:50:17.689694   73057 controller_utils.go:1032] Caches are synced for deployment controller
W0315 20:50:17.690] I0315 20:50:17.690187   73057 controller_utils.go:1032] Caches are synced for TTL controller
W0315 20:50:17.690] I0315 20:50:17.690224   73057 controller_utils.go:1032] Caches are synced for certificate controller
W0315 20:50:17.690] I0315 20:50:17.690399   73057 controller_utils.go:1032] Caches are synced for PVC protection controller
W0315 20:50:17.691] E0315 20:50:17.691100   73057 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0315 20:50:17.692] E0315 20:50:17.692184   73057 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
W0315 20:50:17.699] I0315 20:50:17.699508   73057 controller_utils.go:1032] Caches are synced for attach detach controller
W0315 20:50:17.700] I0315 20:50:17.699544   73057 controller_utils.go:1032] Caches are synced for expand controller
W0315 20:50:17.700] I0315 20:50:17.700434   73057 controller_utils.go:1032] Caches are synced for HPA controller
W0315 20:50:17.703] I0315 20:50:17.702861   73057 controller_utils.go:1032] Caches are synced for persistent volume controller
W0315 20:50:17.703] I0315 20:50:17.703216   73057 controller_utils.go:1032] Caches are synced for job controller
W0315 20:50:17.703] I0315 20:50:17.703558   73057 controller_utils.go:1032] Caches are synced for ReplicaSet controller
... skipping 8 lines ...
W0315 20:50:17.897] I0315 20:50:17.897510   73057 controller_utils.go:1032] Caches are synced for resource quota controller
I0315 20:50:18.532] +++ [0315 20:50:18] On try 3, controller-manager: ok
I0315 20:50:18.720] node/127.0.0.1 created
I0315 20:50:18.729] +++ [0315 20:50:18] Checking kubectl version
I0315 20:50:18.800] Client Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.9-beta.0.26+c4300c6bbf8666", GitCommit:"c4300c6bbf8666a97923aaecb66c6129fed32724", GitTreeState:"clean", BuildDate:"2019-03-15T20:47:54Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
I0315 20:50:18.800] Server Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.9-beta.0.26+c4300c6bbf8666", GitCommit:"c4300c6bbf8666a97923aaecb66c6129fed32724", GitTreeState:"clean", BuildDate:"2019-03-15T20:48:18Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
W0315 20:50:18.901] W0315 20:50:18.721228   73057 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0315 20:50:18.901] I0315 20:50:18.876207   73057 controller_utils.go:1025] Waiting for caches to sync for garbage collector controller
W0315 20:50:18.901] I0315 20:50:18.892757   73057 controller_utils.go:1025] Waiting for caches to sync for resource quota controller
W0315 20:50:18.905] I0315 20:50:18.905406   73057 controller_utils.go:1032] Caches are synced for garbage collector controller
W0315 20:50:18.906] I0315 20:50:18.905432   73057 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
W0315 20:50:18.977] I0315 20:50:18.976821   73057 controller_utils.go:1032] Caches are synced for garbage collector controller
W0315 20:50:18.993] I0315 20:50:18.993129   73057 controller_utils.go:1032] Caches are synced for resource quota controller
... skipping 80 lines ...
I0315 20:50:22.860] +++ [0315 20:50:22] Creating namespace namespace-1552683022-25787
I0315 20:50:22.934] namespace/namespace-1552683022-25787 created
I0315 20:50:23.008] Context "test" modified.
I0315 20:50:23.014] +++ [0315 20:50:23] Testing RESTMapper
W0315 20:50:23.114] I0315 20:50:22.705135   73057 node_lifecycle_controller.go:1095] Initializing eviction metric for zone: 
W0315 20:50:23.115] I0315 20:50:22.705284   73057 node_lifecycle_controller.go:945] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0315 20:50:23.215] +++ [0315 20:50:23] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0315 20:50:23.216] +++ exit code: 0
I0315 20:50:23.258] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0315 20:50:23.258] bindings                                                                      true         Binding
I0315 20:50:23.259] componentstatuses                 cs                                          false        ComponentStatus
I0315 20:50:23.259] configmaps                        cm                                          true         ConfigMap
I0315 20:50:23.259] endpoints                         ep                                          true         Endpoints
... skipping 604 lines ...
I0315 20:50:42.736] poddisruptionbudget.policy/test-pdb-3 created
I0315 20:50:42.831] test-cmd-util.sh:506: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0315 20:50:42.903] poddisruptionbudget.policy/test-pdb-4 created
I0315 20:50:42.997] test-cmd-util.sh:510: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0315 20:50:43.163] test-cmd-util.sh:516: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
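The test-cmd-util.sh:NNN assertion lines throughout this log come from helpers such as kube::test::get_object_assert, which render objects through a Go template and compare the result to an expected string. A minimal sketch of the underlying call for the assertion above (namespace taken from the log; an empty result means no pods exist yet):

  kubectl get pods --namespace=test-kubectl-describe-pod \
    -o go-template='{{range .items}}{{.metadata.name}}:{{end}}'
  # The helper diffs this output against the expected value and prints
  # "Successful get pods ..." on a match.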
I0315 20:50:43.328] pod/env-test-pod created
W0315 20:50:43.429] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0315 20:50:43.429] error: setting 'all' parameter but found a non empty selector. 
W0315 20:50:43.429] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0315 20:50:43.430] I0315 20:50:42.390967   68826 controller.go:597] quota admission added evaluator for: {policy poddisruptionbudgets}
W0315 20:50:43.430] error: min-available and max-unavailable cannot be both specified
I0315 20:50:43.530] test-cmd-util.sh:519: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0315 20:50:43.531] Name:               env-test-pod
I0315 20:50:43.531] Namespace:          test-kubectl-describe-pod
I0315 20:50:43.531] Priority:           0
I0315 20:50:43.531] PriorityClassName:  <none>
I0315 20:50:43.531] Node:               <none>
... skipping 161 lines ...
I0315 20:50:56.691] pod/valid-pod patched
I0315 20:50:56.794] test-cmd-util.sh:721: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0315 20:50:56.877] pod/valid-pod patched
I0315 20:50:56.973] test-cmd-util.sh:726: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0315 20:50:57.139] pod/valid-pod patched
I0315 20:50:57.238] test-cmd-util.sh:742: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0315 20:50:57.418] +++ [0315 20:50:57] "kubectl patch with resourceVersion 482" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
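The conflict above is produced deliberately: the test embeds a stale resourceVersion in the patch body, so the API server rejects the patch with 409 Conflict. A rough sketch of the pattern (label value hypothetical):

  # resourceVersion 482 is older than the live object, so the server
  # answers with the Conflict error asserted in the log.
  kubectl patch pod valid-pod \
    -p '{"metadata":{"resourceVersion":"482","labels":{"name":"conflict"}}}'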
I0315 20:50:57.661] pod "valid-pod" deleted
I0315 20:50:57.670] pod/valid-pod replaced
I0315 20:50:57.768] test-cmd-util.sh:766: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0315 20:50:57.914] Successful
I0315 20:50:57.914] message:error: --grace-period must have --force specified
I0315 20:50:57.914] has:\-\-grace-period must have \-\-force specified
I0315 20:50:58.062] Successful
I0315 20:50:58.063] message:error: --timeout must have --force specified
I0315 20:50:58.063] has:\-\-timeout must have \-\-force specified
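The two failures above pin down kubectl delete's flag validation in this branch: --grace-period and --timeout are each rejected unless --force is also given. A sketch of the calls, assuming the valid-pod object from the surrounding test:

  kubectl delete pod valid-pod --grace-period=0            # rejected: needs --force
  kubectl delete pod valid-pod --timeout=1m                # rejected: needs --force
  kubectl delete pod valid-pod --grace-period=0 --force    # accepted form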
I0315 20:50:58.208] node/node-v1-test created
W0315 20:50:58.308] W0315 20:50:58.207774   73057 actual_state_of_world.go:491] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0315 20:50:58.409] node/node-v1-test replaced
I0315 20:50:58.454] test-cmd-util.sh:803: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0315 20:50:58.534] node "node-v1-test" deleted
I0315 20:50:58.633] test-cmd-util.sh:810: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0315 20:50:58.885] test-cmd-util.sh:813: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0315 20:50:59.751] test-cmd-util.sh:826: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 27 lines ...
I0315 20:51:01.140] pod/redis-master created
I0315 20:51:01.143] pod/valid-pod created
W0315 20:51:01.243] Edit cancelled, no changes made.
W0315 20:51:01.243] Edit cancelled, no changes made.
W0315 20:51:01.244] Edit cancelled, no changes made.
W0315 20:51:01.244] Edit cancelled, no changes made.
W0315 20:51:01.244] error: 'name' already has a value (valid-pod), and --overwrite is false
W0315 20:51:01.244] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0315 20:51:01.344] test-cmd-util.sh:865: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: redis-master:valid-pod:
I0315 20:51:01.351] test-cmd-util.sh:869: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: redis-master:valid-pod:
I0315 20:51:01.438] pod "redis-master" deleted
I0315 20:51:01.442] pod "valid-pod" deleted
I0315 20:51:01.545] test-cmd-util.sh:873: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 71 lines ...
I0315 20:51:07.406] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0315 20:51:07.409] +++ working dir: /go/src/k8s.io/kubernetes
I0315 20:51:07.411] +++ command: run_kubectl_create_error_tests
I0315 20:51:07.423] +++ [0315 20:51:07] Creating namespace namespace-1552683067-16135
I0315 20:51:07.496] namespace/namespace-1552683067-16135 created
I0315 20:51:07.566] Context "test" modified.
I0315 20:51:07.571] +++ [0315 20:51:07] Testing kubectl create with error
W0315 20:51:07.672] Error: required flag(s) "filename" not set
W0315 20:51:07.672] 
W0315 20:51:07.672] 
W0315 20:51:07.672] Examples:
W0315 20:51:07.672]   # Create a pod using the data in pod.json.
W0315 20:51:07.672]   kubectl create -f ./pod.json
W0315 20:51:07.672]   
... skipping 38 lines ...
W0315 20:51:07.676]   kubectl create -f FILENAME [options]
W0315 20:51:07.677] 
W0315 20:51:07.677] Use "kubectl <command> --help" for more information about a given command.
W0315 20:51:07.677] Use "kubectl options" for a list of global command-line options (applies to all commands).
W0315 20:51:07.677] 
W0315 20:51:07.677] required flag(s) "filename" not set
I0315 20:51:07.782] +++ [0315 20:51:07] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
I0315 20:51:07.955] +++ exit code: 0
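The validation failure above comes from feeding kubectl a manifest whose containers[0].args holds an empty (nil) entry; client-side schema validation flags the unknown "nil" type before anything reaches the server. A sketch using the same fixture path from the log:

  # Fails validation as shown in the log:
  kubectl create -f hack/testdata/invalid-rc-with-empty-args.yaml
  # The error message's escape hatch, skipping client-side validation:
  kubectl create -f hack/testdata/invalid-rc-with-empty-args.yaml --validate=false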
I0315 20:51:07.996] Recording: run_kubectl_apply_tests
I0315 20:51:07.997] Running command: run_kubectl_apply_tests
I0315 20:51:08.015] 
I0315 20:51:08.017] +++ Running case: test-cmd.run_kubectl_apply_tests 
I0315 20:51:08.019] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 12 lines ...
I0315 20:51:09.956] deployment.extensions "test-deployment-retainkeys" deleted
I0315 20:51:10.054] test-cmd-util.sh:995: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 20:51:10.213] pod/selector-test-pod created
I0315 20:51:10.314] test-cmd-util.sh:999: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0315 20:51:10.401] Successful
I0315 20:51:10.401] message:No resources found.
I0315 20:51:10.401] Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0315 20:51:10.401] has:pods "selector-test-pod-dont-apply" not found
I0315 20:51:10.484] pod "selector-test-pod" deleted
I0315 20:51:10.579] test-cmd-util.sh:1009: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 20:51:10.745] pod/a created
W0315 20:51:10.845] I0315 20:51:08.929399   68826 controller.go:597] quota admission added evaluator for: {extensions deployments}
W0315 20:51:10.846] I0315 20:51:08.949348   68826 controller.go:597] quota admission added evaluator for: {apps replicasets}
... skipping 3 lines ...
W0315 20:51:10.847] I0315 20:51:09.542830   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683068-5997", Name:"test-deployment-retainkeys-7fb69956c", UID:"0f3d5c11-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"486", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: test-deployment-retainkeys-7fb69956c-6wh67
W0315 20:51:10.847] I0315 20:51:09.550875   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683068-5997", Name:"test-deployment-retainkeys", UID:"0f3a507f-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"493", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-deployment-retainkeys-5f667997fd to 1
W0315 20:51:10.847] I0315 20:51:09.553420   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683068-5997", Name:"test-deployment-retainkeys-5f667997fd", UID:"0f98e5d3-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"495", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-deployment-retainkeys-5f667997fd-4xnmj
I0315 20:51:12.249] test-cmd-util.sh:1014: Successful get pods a {{.metadata.name}}: a
I0315 20:51:12.341] Successful
I0315 20:51:12.341] message:No resources found.
I0315 20:51:12.341] Error from server (NotFound): pods "b" not found
I0315 20:51:12.341] has:pods "b" not found
I0315 20:51:12.488] pod/b created
I0315 20:51:12.498] pod/a pruned
I0315 20:51:14.188] test-cmd-util.sh:1022: Successful get pods b {{.metadata.name}}: b
I0315 20:51:14.274] Successful
I0315 20:51:14.274] message:No resources found.
I0315 20:51:14.275] Error from server (NotFound): pods "a" not found
I0315 20:51:14.275] has:pods "a" not found
I0315 20:51:14.356] pod "b" deleted
I0315 20:51:14.447] test-cmd-util.sh:1032: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 20:51:14.598] pod/a created
I0315 20:51:14.695] test-cmd-util.sh:1037: Successful get pods a {{.metadata.name}}: a
I0315 20:51:14.782] Successful
I0315 20:51:14.782] message:No resources found.
I0315 20:51:14.782] Error from server (NotFound): pods "b" not found
I0315 20:51:14.782] has:pods "b" not found
I0315 20:51:14.937] pod/b created
I0315 20:51:15.038] test-cmd-util.sh:1045: Successful get pods a {{.metadata.name}}: a
I0315 20:51:15.129] test-cmd-util.sh:1046: Successful get pods b {{.metadata.name}}: b
I0315 20:51:15.206] pod "a" deleted
I0315 20:51:15.209] pod "b" deleted
I0315 20:51:15.372] Successful
I0315 20:51:15.372] message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector.
I0315 20:51:15.372] has:all resources selected for prune without explicitly passing --all
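This block exercises kubectl apply --prune: objects created by a previous apply but missing from the current manifest set get deleted (pod/a pruned above), and, as the error shows, pruning without a label selector demands an explicit --all. A rough sketch of the two invocations (directory and label names hypothetical):

  # Prune objects matching the selector that are absent from manifests/:
  kubectl apply --prune -l name=my-app -f manifests/
  # Prune with no selector: --all must be passed explicitly.
  kubectl apply --prune --all -f manifests/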
I0315 20:51:15.520] pod/a created
I0315 20:51:15.525] pod/b created
I0315 20:51:15.532] service/prune-svc created
I0315 20:51:17.035] test-cmd-util.sh:1058: Successful get pods a {{.metadata.name}}: a
I0315 20:51:17.128] test-cmd-util.sh:1059: Successful get pods b {{.metadata.name}}: b
... skipping 118 lines ...
I0315 20:51:27.742] +++ [0315 20:51:27] Testing kubectl create filter
I0315 20:51:27.830] test-cmd-util.sh:1101: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 20:51:27.996] pod/selector-test-pod created
I0315 20:51:28.094] test-cmd-util.sh:1105: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0315 20:51:28.186] Successful
I0315 20:51:28.186] message:No resources found.
I0315 20:51:28.186] Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0315 20:51:28.186] has:pods "selector-test-pod-dont-apply" not found
I0315 20:51:28.268] pod "selector-test-pod" deleted
I0315 20:51:28.287] +++ exit code: 0
I0315 20:51:28.320] Recording: run_kubectl_apply_deployments_tests
I0315 20:51:28.321] Running command: run_kubectl_apply_deployments_tests
I0315 20:51:28.339] 
... skipping 34 lines ...
W0315 20:51:30.238] I0315 20:51:27.054499   68826 controller.go:597] quota admission added evaluator for: {batch cronjobs}
W0315 20:51:30.238] I0315 20:51:28.961339   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683088-21295", Name:"my-depl", UID:"1b2a32a1-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"623", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set my-depl-844db54fcf to 1
W0315 20:51:30.238] I0315 20:51:28.965154   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683088-21295", Name:"my-depl-844db54fcf", UID:"1b2ab122-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"624", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-depl-844db54fcf-bds7g
W0315 20:51:30.238] I0315 20:51:29.487912   68826 controller.go:597] quota admission added evaluator for: {extensions replicasets}
W0315 20:51:30.239] I0315 20:51:29.492394   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683088-21295", Name:"my-depl", UID:"1b2a32a1-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"633", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set my-depl-574c668485 to 1
W0315 20:51:30.239] I0315 20:51:29.494990   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683088-21295", Name:"my-depl-574c668485", UID:"1b7bb74e-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-depl-574c668485-gpv59
W0315 20:51:30.239] E0315 20:51:30.135732   73057 replica_set.go:450] Sync "namespace-1552683088-21295/my-depl-844db54fcf" failed with Operation cannot be fulfilled on replicasets.apps "my-depl-844db54fcf": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1552683088-21295/my-depl-844db54fcf, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 1b2ab122-4764-11e9-8898-0242ac110002, UID in object meta: 
I0315 20:51:30.340] test-cmd-util.sh:1150: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 20:51:30.340] test-cmd-util.sh:1151: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 20:51:30.417] test-cmd-util.sh:1152: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 20:51:30.508] test-cmd-util.sh:1156: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 20:51:30.682] deployment.extensions/nginx created
I0315 20:51:30.782] test-cmd-util.sh:1160: Successful get deployment nginx {{.metadata.name}}: nginx
I0315 20:51:34.981] Successful
I0315 20:51:34.981] message:Error from server (Conflict): error when applying patch:
I0315 20:51:34.981] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1552683088-21295\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0315 20:51:34.982] to:
I0315 20:51:34.982] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I0315 20:51:34.982] Name: "nginx", Namespace: "namespace-1552683088-21295"
I0315 20:51:34.983] Object: &{map["apiVersion":"extensions/v1beta1" "metadata":map["name":"nginx" "resourceVersion":"675" "annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1552683088-21295\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "namespace":"namespace-1552683088-21295" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1552683088-21295/deployments/nginx" "uid":"1c314361-4764-11e9-8898-0242ac110002" "generation":'\x01' "creationTimestamp":"2019-03-15T20:51:30Z" "labels":map["name":"nginx"]] "spec":map["revisionHistoryLimit":'\n' "progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\x03' "selector":map["matchLabels":map["name":"nginx1"]] "template":map["metadata":map["labels":map["name":"nginx1"] "creationTimestamp":<nil>] "spec":map["containers":[map["terminationMessagePolicy":"File" "imagePullPolicy":"IfNotPresent" "name":"nginx" "image":"k8s.gcr.io/nginx:test-cmd" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log"]] "restartPolicy":"Always" "terminationGracePeriodSeconds":'\x1e' "dnsPolicy":"ClusterFirst" "securityContext":map[] "schedulerName":"default-scheduler"]] "strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxUnavailable":'\x01' "maxSurge":'\x01']]] "status":map["observedGeneration":'\x01' "replicas":'\x03' "updatedReplicas":'\x03' "unavailableReplicas":'\x03' "conditions":[map["message":"Deployment does not have minimum availability." "type":"Available" "status":"False" "lastUpdateTime":"2019-03-15T20:51:30Z" "lastTransitionTime":"2019-03-15T20:51:30Z" "reason":"MinimumReplicasUnavailable"]]] "kind":"Deployment"]}
I0315 20:51:34.983] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I0315 20:51:34.983] has:Error from server (Conflict)
W0315 20:51:35.084] I0315 20:51:30.685073   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683088-21295", Name:"nginx", UID:"1c314361-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"662", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-74d9fbb98 to 3
W0315 20:51:35.084] I0315 20:51:30.687312   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683088-21295", Name:"nginx-74d9fbb98", UID:"1c31b7e7-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"663", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-74d9fbb98-42ksn
W0315 20:51:35.085] I0315 20:51:30.689035   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683088-21295", Name:"nginx-74d9fbb98", UID:"1c31b7e7-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"663", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-74d9fbb98-vtzng
W0315 20:51:35.085] I0315 20:51:30.689397   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683088-21295", Name:"nginx-74d9fbb98", UID:"1c31b7e7-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"663", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-74d9fbb98-pfgwx
W0315 20:51:36.816] I0315 20:51:36.816065   73057 horizontal.go:366] Horizontal Pod Autoscaler has been deleted namespace-1552683064-2732/frontend
I0315 20:51:40.171] deployment.extensions/nginx configured
... skipping 152 lines ...
I0315 20:51:48.195] namespace/namespace-1552683108-18995 created
I0315 20:51:48.310] Context "test" modified.
I0315 20:51:48.316] +++ [0315 20:51:48] Testing kubectl get
I0315 20:51:48.455] test-cmd-util.sh:1502: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 20:51:48.593] Successful
I0315 20:51:48.593] message:No resources found.
I0315 20:51:48.594] Error from server (NotFound): pods "abc" not found
I0315 20:51:48.594] has:pods "abc" not found
I0315 20:51:48.750] test-cmd-util.sh:1510: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 20:51:48.886] Successful
I0315 20:51:48.887] message:Error from server (NotFound): pods "abc" not found
I0315 20:51:48.888] has:pods "abc" not found
I0315 20:51:49.012] test-cmd-util.sh:1518: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 20:51:49.145] Successful
I0315 20:51:49.146] message:{
I0315 20:51:49.147]     "apiVersion": "v1",
I0315 20:51:49.147]     "items": [],
... skipping 33 lines ...
I0315 20:51:50.205] has not:No resources found
I0315 20:51:50.354] Successful
I0315 20:51:50.355] message:No resources found.
I0315 20:51:50.355] has:No resources found
I0315 20:51:50.489] test-cmd-util.sh:1562: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 20:51:50.623] Successful
I0315 20:51:50.624] message:Error from server (NotFound): pods "abc" not found
I0315 20:51:50.624] has:pods "abc" not found
I0315 20:51:50.627] FAIL!
I0315 20:51:50.627] message:Error from server (NotFound): pods "abc" not found
I0315 20:51:50.627] has not:List
I0315 20:51:50.627] 1568 /go/src/k8s.io/kubernetes/hack/make-rules/test-cmd-util.sh
I0315 20:51:50.804] Successful
I0315 20:51:50.805] message:I0315 20:51:50.733785   85150 loader.go:359] Config loaded from file /tmp/tmp.ul2PznRwVS/.kube/config
I0315 20:51:50.805] I0315 20:51:50.737445   85150 loader.go:359] Config loaded from file /tmp/tmp.ul2PznRwVS/.kube/config
I0315 20:51:50.806] I0315 20:51:50.739047   85150 round_trippers.go:405] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
... skipping 992 lines ...
I0315 20:51:54.869] }
I0315 20:51:54.972] test-cmd-util.sh:1621: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0315 20:51:55.231] <no value>Successful
I0315 20:51:55.231] message:valid-pod:
I0315 20:51:55.231] has:valid-pod:
I0315 20:51:55.312] Successful
I0315 20:51:55.313] message:Error executing template: missing is not found. Printing more information for debugging the template:
I0315 20:51:55.313] 	template was:
I0315 20:51:55.313] 		{.missing}
I0315 20:51:55.313] 	object given to jsonpath engine was:
I0315 20:51:55.314] 		map[string]interface {}{"apiVersion":"v1", "metadata":map[string]interface {}{"selfLink":"/api/v1/namespaces/namespace-1552683114-4935/pods/valid-pod", "uid":"2a8a4d27-4764-11e9-8898-0242ac110002", "resourceVersion":"770", "creationTimestamp":"2019-03-15T20:51:54Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1552683114-4935"}, "spec":map[string]interface {}{"priority":0, "containers":[]interface {}{map[string]interface {}{"name":"kubernetes-serve-hostname", "image":"k8s.gcr.io/serve_hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File", "imagePullPolicy":"Always"}}, "restartPolicy":"Always", "terminationGracePeriodSeconds":30, "dnsPolicy":"ClusterFirst", "securityContext":map[string]interface {}{}, "schedulerName":"default-scheduler"}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}, "kind":"Pod"}
I0315 20:51:55.314] has:missing is not found
I0315 20:51:55.394] Successful
I0315 20:51:55.394] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0315 20:51:55.394] 	template was:
I0315 20:51:55.394] 		{{.missing}}
I0315 20:51:55.394] 	raw data was:
I0315 20:51:55.395] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-03-15T20:51:54Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1552683114-4935","resourceVersion":"770","selfLink":"/api/v1/namespaces/namespace-1552683114-4935/pods/valid-pod","uid":"2a8a4d27-4764-11e9-8898-0242ac110002"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0315 20:51:55.395] 	object given to template engine was:
I0315 20:51:55.396] 		map[kind:Pod metadata:map[namespace:namespace-1552683114-4935 resourceVersion:770 selfLink:/api/v1/namespaces/namespace-1552683114-4935/pods/valid-pod uid:2a8a4d27-4764-11e9-8898-0242ac110002 creationTimestamp:2019-03-15T20:51:54Z labels:map[name:valid-pod] name:valid-pod] spec:map[terminationGracePeriodSeconds:30 containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[]] status:map[phase:Pending qosClass:Guaranteed] apiVersion:v1]
I0315 20:51:55.396] has:map has no entry for key "missing"
W0315 20:51:55.496] error: error executing jsonpath "{.missing}": missing is not found
W0315 20:51:55.497] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
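The paired errors above show the same missing-key lookup through kubectl's two template engines, each failing with its own diagnostic, which is what the suite asserts. A minimal sketch against the valid-pod object:

  # JSONPath engine: "missing is not found"
  kubectl get pod valid-pod -o jsonpath='{.missing}'
  # Go template engine: map has no entry for key "missing"
  kubectl get pod valid-pod -o go-template='{{.missing}}'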
W0315 20:51:56.476] E0315 20:51:56.475601   85483 streamwatcher.go:109] Unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)
I0315 20:51:56.576] Successful
I0315 20:51:56.577] message:NAME        READY     STATUS    RESTARTS   AGE
I0315 20:51:56.577] valid-pod   0/1       Pending   0          1s
I0315 20:51:56.577] has:STATUS
I0315 20:51:56.577] Successful
... skipping 78 lines ...
I0315 20:51:58.758]   terminationGracePeriodSeconds: 30
I0315 20:51:58.758] status:
I0315 20:51:58.758]   phase: Pending
I0315 20:51:58.758]   qosClass: Guaranteed
I0315 20:51:58.758] has:name: valid-pod
I0315 20:51:58.758] Successful
I0315 20:51:58.759] message:Error from server (NotFound): pods "invalid-pod" not found
I0315 20:51:58.759] has:"invalid-pod" not found
I0315 20:51:58.816] pod "valid-pod" deleted
I0315 20:51:58.905] test-cmd-util.sh:1659: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 20:51:59.043] pod/redis-master created
I0315 20:51:59.045] pod/valid-pod created
I0315 20:51:59.138] Successful
... skipping 237 lines ...
I0315 20:52:00.160] namespace-1552683108-18995   12s
I0315 20:52:00.160] namespace-1552683114-4935    6s
I0315 20:52:00.160] namespace-1552683119-30498   1s
I0315 20:52:00.160] has:application/json
W0315 20:52:00.288] I0315 20:52:00.288214   68826 controller.go:597] quota admission added evaluator for: {extensions daemonsets}
W0315 20:52:00.304] I0315 20:52:00.304510   68826 controller.go:597] quota admission added evaluator for: {apps controllerrevisions}
W0315 20:52:00.309] I0315 20:52:00.308764   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683119-30498", Name:"bind", UID:"2dd70c76-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"786", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:52:00.309] I0315 20:52:00.308801   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683119-30498", Name:"bind", UID:"2dd70c76-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"786", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:52:00.310] I0315 20:52:00.308816   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683119-30498", Name:"bind", UID:"2dd70c76-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"786", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:52:00.312] I0315 20:52:00.311841   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683119-30498", Name:"bind", UID:"2dd70c76-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"789", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:52:00.312] I0315 20:52:00.311880   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683119-30498", Name:"bind", UID:"2dd70c76-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"789", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:52:00.313] I0315 20:52:00.311892   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683119-30498", Name:"bind", UID:"2dd70c76-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"789", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
I0315 20:52:00.413] daemonset.extensions/bind created
I0315 20:52:00.413] test-cmd-util.sh:1404: Successful get ds {{range.items}}{{.metadata.name}}:{{end}}: bind:
I0315 20:52:00.551] Successful
I0315 20:52:00.552] message:NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR
I0315 20:52:00.552] bind 1 0 0 0 0 <none>
I0315 20:52:00.552] has:NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR
... skipping 57 lines ...
I0315 20:52:08.338] 
I0315 20:52:08.340] +++ Running case: test-cmd.run_create_secret_tests 
I0315 20:52:08.342] +++ working dir: /go/src/k8s.io/kubernetes
I0315 20:52:08.345] +++ command: run_create_secret_tests
I0315 20:52:08.440] Successful
I0315 20:52:08.441] message:No resources found.
I0315 20:52:08.441] Error from server (NotFound): secrets "mysecret" not found
I0315 20:52:08.441] has:secrets "mysecret" not found
I0315 20:52:08.602] Successful
I0315 20:52:08.603] message:No resources found.
I0315 20:52:08.603] Error from server (NotFound): secrets "mysecret" not found
I0315 20:52:08.603] has:secrets "mysecret" not found
I0315 20:52:08.604] Successful
I0315 20:52:08.604] message:user-specified
I0315 20:52:08.604] has:user-specified
I0315 20:52:08.677] Successful
I0315 20:52:08.754] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"32e2487f-4764-11e9-8898-0242ac110002","resourceVersion":"858","creationTimestamp":"2019-03-15T20:52:08Z"}}
... skipping 119 lines ...
I0315 20:52:13.158] Successful
I0315 20:52:13.159] message:kind.mygroup.example.com/myobj
I0315 20:52:13.159] has:kind.mygroup.example.com/myobj
I0315 20:52:13.239] Successful
I0315 20:52:13.239] message:kind.mygroup.example.com/myobj
I0315 20:52:13.239] has:kind.mygroup.example.com/myobj
W0315 20:52:13.340] E0315 20:52:11.849888   68826 autoregister_controller.go:190] v1alpha1.mygroup.example.com failed with : apiservices.apiregistration.k8s.io "v1alpha1.mygroup.example.com" already exists
W0315 20:52:13.340] I0315 20:52:13.057718   68826 controller.go:597] quota admission added evaluator for: {mygroup.example.com resources}
I0315 20:52:13.440] Successful
I0315 20:52:13.441] message:kind.mygroup.example.com/myobj
I0315 20:52:13.441] has:kind.mygroup.example.com/myobj
I0315 20:52:13.441] kind.mygroup.example.com "myobj" deleted
I0315 20:52:13.529] test-cmd-util.sh:2108: Successful get resources {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 114 lines ...
I0315 20:52:15.392] foo.company.com/test patched
I0315 20:52:15.487] test-cmd-util.sh:2143: Successful get foos/test {{.patched}}: value1
I0315 20:52:15.569] foo.company.com/test patched
I0315 20:52:15.664] test-cmd-util.sh:2145: Successful get foos/test {{.patched}}: value2
I0315 20:52:15.749] foo.company.com/test patched
I0315 20:52:15.845] test-cmd-util.sh:2147: Successful get foos/test {{.patched}}: <no value>
I0315 20:52:16.021] +++ [0315 20:52:16] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0315 20:52:16.110] {
I0315 20:52:16.110]     "apiVersion": "company.com/v1",
I0315 20:52:16.110]     "kind": "Foo",
I0315 20:52:16.110]     "metadata": {
I0315 20:52:16.110]         "annotations": {
I0315 20:52:16.111]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 112 lines ...
W0315 20:52:17.731] I0315 20:52:13.670723   68826 controller.go:597] quota admission added evaluator for: {company.com foos}
W0315 20:52:17.731] I0315 20:52:17.356064   68826 controller.go:597] quota admission added evaluator for: {company.com bars}
W0315 20:52:17.732] /go/src/k8s.io/kubernetes/hack/lib/test.sh: line 264: 87974 Killed                  while [ ${tries} -lt 10 ]; do
W0315 20:52:17.732]     tries=$((tries+1)); kubectl "${kube_flags[@]}" patch bars/test -p "{\"patched\":\"${tries}\"}" --type=merge; sleep 1;
W0315 20:52:17.732] done
W0315 20:52:17.732] /go/src/k8s.io/kubernetes/hack/make-rules/test-cmd-util.sh: line 2201: 87973 Killed                  kubectl "${kube_flags[@]}" get bars --request-timeout=1m --watch-only -o name
W0315 20:52:19.017] E0315 20:52:19.016256   73057 resource_quota_controller.go:460] failed to sync resource monitors: [couldn't start monitor for resource {"company.com" "v1" "bars"}: unable to monitor quota for resource "company.com/v1, Resource=bars", couldn't start monitor for resource {"company.com" "v1" "validfoos"}: unable to monitor quota for resource "company.com/v1, Resource=validfoos", couldn't start monitor for resource {"mygroup.example.com" "v1alpha1" "resources"}: unable to monitor quota for resource "mygroup.example.com/v1alpha1, Resource=resources", couldn't start monitor for resource {"company.com" "v1" "foos"}: unable to monitor quota for resource "company.com/v1, Resource=foos"]
W0315 20:52:19.148] I0315 20:52:19.148303   73057 controller_utils.go:1025] Waiting for caches to sync for garbage collector controller
W0315 20:52:19.249] I0315 20:52:19.248690   73057 controller_utils.go:1032] Caches are synced for garbage collector controller
I0315 20:52:19.354] test-cmd-util.sh:2227: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 20:52:19.517] foo.company.com/test created
I0315 20:52:19.629] test-cmd-util.sh:2233: Successful get foos {{range.items}}{{.metadata.name}}:{{end}}: test:
I0315 20:52:19.728] test-cmd-util.sh:2236: Successful get foos/test {{.someField}}: field1
... skipping 59 lines ...
I0315 20:52:25.299] test-cmd-util.sh:2362: Successful get bars {{len .items}}: 1
I0315 20:52:25.373] namespace "non-native-resources" deleted
I0315 20:52:30.547] test-cmd-util.sh:2365: Successful get bars {{len .items}}: 0
I0315 20:52:30.715] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
I0315 20:52:30.814] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
W0315 20:52:30.914] No resources found.
W0315 20:52:30.915] Error from server (NotFound): namespaces "non-native-resources" not found
I0315 20:52:31.015] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0315 20:52:31.029] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0315 20:52:31.055] +++ exit code: 0
I0315 20:52:31.101] Recording: run_cmd_with_img_tests
I0315 20:52:31.101] Running command: run_cmd_with_img_tests
I0315 20:52:31.117] 
... skipping 6 lines ...
I0315 20:52:31.298] +++ [0315 20:52:31] Testing cmd with image
I0315 20:52:31.389] Successful
I0315 20:52:31.389] message:deployment.apps/test1 created
I0315 20:52:31.389] has:deployment.apps/test1 created
I0315 20:52:31.475] deployment.extensions "test1" deleted
I0315 20:52:31.555] Successful
I0315 20:52:31.555] message:error: Invalid image name "InvalidImageName": invalid reference format
I0315 20:52:31.555] has:error: Invalid image name "InvalidImageName": invalid reference format
I0315 20:52:31.566] +++ exit code: 0
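run_cmd_with_img_tests checks client-side image-reference validation: kubectl run accepts a well-formed image name but rejects one with uppercase letters before any API call. A sketch (in this branch kubectl run creates deployment.apps/test1, matching the log; "validname" stands in for the accepted image):

  kubectl run test1 --image=validname          # deployment.apps/test1 created
  kubectl run test1 --image=InvalidImageName   # error: Invalid image name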
I0315 20:52:31.600] Recording: run_recursive_resources_tests
I0315 20:52:31.600] Running command: run_recursive_resources_tests
I0315 20:52:31.616] 
I0315 20:52:31.618] +++ Running case: test-cmd.run_recursive_resources_tests 
I0315 20:52:31.620] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 4 lines ...
I0315 20:52:31.770] Context "test" modified.
I0315 20:52:31.856] test-cmd-util.sh:2385: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 20:52:32.084] test-cmd-util.sh:2389: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 20:52:32.086] Successful
I0315 20:52:32.086] message:pod/busybox0 created
I0315 20:52:32.086] pod/busybox1 created
I0315 20:52:32.087] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0315 20:52:32.087] has:error validating data: kind not set
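The recursive tests point kubectl at a directory tree containing one deliberately broken manifest (busybox-broken.yaml, whose "kind" key is misspelled "ind", visible in the decode errors below); with --recursive, the good pods are created while the broken file reports its error, which is what the assertions check. A sketch using the fixture path from the log:

  # Creates busybox0 and busybox1, then fails on the broken manifest:
  kubectl create -f hack/testdata/recursive/pod --recursive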
I0315 20:52:32.167] test-cmd-util.sh:2394: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 20:52:32.332] test-cmd-util.sh:2402: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0315 20:52:32.334] Successful
I0315 20:52:32.334] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0315 20:52:32.334] has:Object 'Kind' is missing
I0315 20:52:32.418] test-cmd-util.sh:2409: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 20:52:32.658] test-cmd-util.sh:2413: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0315 20:52:32.660] Successful
I0315 20:52:32.660] message:pod/busybox0 replaced
I0315 20:52:32.660] pod/busybox1 replaced
I0315 20:52:32.660] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0315 20:52:32.660] has:error validating data: kind not set
I0315 20:52:32.763] test-cmd-util.sh:2418: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 20:52:32.854] Successful
I0315 20:52:32.854] message:Name:               busybox0
I0315 20:52:32.855] Namespace:          namespace-1552683151-13612
I0315 20:52:32.855] Priority:           0
I0315 20:52:32.855] PriorityClassName:  <none>
... skipping 159 lines ...
I0315 20:52:32.873] has:Object 'Kind' is missing
I0315 20:52:32.958] test-cmd-util.sh:2428: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 20:52:33.133] test-cmd-util.sh:2432: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0315 20:52:33.135] Successful
I0315 20:52:33.135] message:pod/busybox0 annotated
I0315 20:52:33.135] pod/busybox1 annotated
I0315 20:52:33.135] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0315 20:52:33.135] has:Object 'Kind' is missing
I0315 20:52:33.216] test-cmd-util.sh:2437: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 20:52:33.451] test-cmd-util.sh:2441: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0315 20:52:33.453] Successful
I0315 20:52:33.453] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0315 20:52:33.454] pod/busybox0 configured
I0315 20:52:33.454] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0315 20:52:33.454] pod/busybox1 configured
I0315 20:52:33.454] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0315 20:52:33.454] has:error validating data: kind not set
I0315 20:52:33.537] test-cmd-util.sh:2447: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 20:52:33.678] deployment.extensions/nginx created
I0315 20:52:33.775] test-cmd-util.sh:2451: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
I0315 20:52:33.858] test-cmd-util.sh:2452: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0315 20:52:34.027] test-cmd-util.sh:2456: Successful get deployment nginx {{ .apiVersion }}: extensions/v1beta1
I0315 20:52:34.028] Successful
... skipping 39 lines ...
I0315 20:52:34.033] status: {}
I0315 20:52:34.033] has:apps/v1beta1
I0315 20:52:34.104] deployment.extensions "nginx" deleted
I0315 20:52:34.193] test-cmd-util.sh:2463: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 20:52:34.342] test-cmd-util.sh:2467: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 20:52:34.343] Successful
I0315 20:52:34.344] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0315 20:52:34.344] has:Object 'Kind' is missing
I0315 20:52:34.424] test-cmd-util.sh:2472: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 20:52:34.509] Successful
I0315 20:52:34.510] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0315 20:52:34.510] has:busybox0:busybox1:
I0315 20:52:34.511] Successful
I0315 20:52:34.511] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0315 20:52:34.511] has:Object 'Kind' is missing
I0315 20:52:34.597] test-cmd-util.sh:2481: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 20:52:34.678] pod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0315 20:52:34.764] test-cmd-util.sh:2486: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0315 20:52:34.766] Successful
I0315 20:52:34.766] message:pod/busybox0 labeled
I0315 20:52:34.766] pod/busybox1 labeled
I0315 20:52:34.766] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0315 20:52:34.766] has:Object 'Kind' is missing
I0315 20:52:34.849] test-cmd-util.sh:2491: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 20:52:34.930] pod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0315 20:52:35.016] test-cmd-util.sh:2496: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0315 20:52:35.018] Successful
I0315 20:52:35.018] message:pod/busybox0 patched
I0315 20:52:35.018] pod/busybox1 patched
I0315 20:52:35.019] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0315 20:52:35.019] has:Object 'Kind' is missing
I0315 20:52:35.100] test-cmd-util.sh:2501: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 20:52:35.266] test-cmd-util.sh:2505: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 20:52:35.268] Successful
I0315 20:52:35.268] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0315 20:52:35.268] pod "busybox0" force deleted
I0315 20:52:35.268] pod "busybox1" force deleted
I0315 20:52:35.269] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0315 20:52:35.269] has:Object 'Kind' is missing
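
This block exercises kubectl's recursive directory handling: hack/testdata/recursive/pod/ contains two valid manifests plus busybox-broken.yaml, whose kind field is misspelled as "ind", so decoding fails with "Object 'Kind' is missing" while busybox0 and busybox1 are still labeled, patched, and force-deleted. A minimal sketch of the command shapes involved (paths come from the log; the flag spellings are kubectl's documented interface, not copied from the test script):

    # Process every manifest under the directory; kubectl continues past the
    # broken file and reports its decode error after the successful objects.
    kubectl label -f hack/testdata/recursive/pod --recursive mylabel=myvalue
    kubectl patch -f hack/testdata/recursive/pod --recursive -p '{"spec":{"containers":[{"name":"busybox","image":"prom/busybox"}]}}'
    kubectl delete -f hack/testdata/recursive/pod --recursive --force --grace-period=0
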
I0315 20:52:35.360] test-cmd-util.sh:2510: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 20:52:35.493] replicationcontroller/busybox0 created
I0315 20:52:35.496] replicationcontroller/busybox1 created
I0315 20:52:35.588] test-cmd-util.sh:2514: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 20:52:35.670] test-cmd-util.sh:2519: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 20:52:35.750] test-cmd-util.sh:2520: Successful get rc busybox0 {{.spec.replicas}}: 1
I0315 20:52:35.832] test-cmd-util.sh:2521: Successful get rc busybox1 {{.spec.replicas}}: 1
I0315 20:52:36.003] test-cmd-util.sh:2526: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0315 20:52:36.089] test-cmd-util.sh:2527: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0315 20:52:36.091] Successful
I0315 20:52:36.091] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0315 20:52:36.091] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0315 20:52:36.092] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 20:52:36.092] has:Object 'Kind' is missing
I0315 20:52:36.166] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0315 20:52:36.262] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0315 20:52:36.356] test-cmd-util.sh:2535: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 20:52:36.439] test-cmd-util.sh:2536: Successful get rc busybox0 {{.spec.replicas}}: 1
I0315 20:52:36.523] test-cmd-util.sh:2537: Successful get rc busybox1 {{.spec.replicas}}: 1
I0315 20:52:36.697] test-cmd-util.sh:2541: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0315 20:52:36.782] test-cmd-util.sh:2542: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0315 20:52:36.784] Successful
I0315 20:52:36.784] message:service/busybox0 exposed
I0315 20:52:36.784] service/busybox1 exposed
I0315 20:52:36.784] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 20:52:36.784] has:Object 'Kind' is missing
I0315 20:52:36.866] test-cmd-util.sh:2548: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 20:52:36.947] test-cmd-util.sh:2549: Successful get rc busybox0 {{.spec.replicas}}: 1
I0315 20:52:37.029] test-cmd-util.sh:2550: Successful get rc busybox1 {{.spec.replicas}}: 1
I0315 20:52:37.225] test-cmd-util.sh:2554: Successful get rc busybox0 {{.spec.replicas}}: 2
I0315 20:52:37.310] test-cmd-util.sh:2555: Successful get rc busybox1 {{.spec.replicas}}: 2
I0315 20:52:37.312] Successful
I0315 20:52:37.312] message:replicationcontroller/busybox0 scaled
I0315 20:52:37.312] replicationcontroller/busybox1 scaled
I0315 20:52:37.312] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 20:52:37.312] has:Object 'Kind' is missing
I0315 20:52:37.402] test-cmd-util.sh:2560: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0315 20:52:37.585] test-cmd-util.sh:2564: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 20:52:37.587] Successful
I0315 20:52:37.587] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0315 20:52:37.587] replicationcontroller "busybox0" force deleted
I0315 20:52:37.588] replicationcontroller "busybox1" force deleted
I0315 20:52:37.588] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 20:52:37.588] has:Object 'Kind' is missing
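
The same recursive pattern is repeated for replication controllers: autoscale, expose, scale, and force delete each succeed on busybox0/busybox1 and surface the decode error for the broken manifest. A sketch, with the numeric values read off the assertions above (min/max/CPU of 1/2/80, port 80, replicas 2):

    kubectl autoscale -f hack/testdata/recursive/rc --recursive --min=1 --max=2 --cpu-percent=80
    kubectl expose -f hack/testdata/recursive/rc --recursive --port=80
    kubectl scale -f hack/testdata/recursive/rc --recursive --replicas=2
    kubectl delete -f hack/testdata/recursive/rc --recursive --force --grace-period=0
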
I0315 20:52:37.681] test-cmd-util.sh:2569: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 20:52:37.828] deployment.extensions/nginx1-deployment created
I0315 20:52:37.831] deployment.extensions/nginx0-deployment created
W0315 20:52:37.932] I0315 20:52:31.379505   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683151-28607", Name:"test1", UID:"405e6bdb-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"963", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-7f54676899 to 1
W0315 20:52:37.932] I0315 20:52:31.383092   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683151-28607", Name:"test1-7f54676899", UID:"405eea0c-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"964", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-7f54676899-gndz8
W0315 20:52:37.932] I0315 20:52:33.680581   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683151-13612", Name:"nginx", UID:"41bd9261-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"988", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-794c6b99b4 to 3
W0315 20:52:37.933] I0315 20:52:33.683354   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683151-13612", Name:"nginx-794c6b99b4", UID:"41be10b4-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"989", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-794c6b99b4-4mjcj
W0315 20:52:37.933] I0315 20:52:33.685921   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683151-13612", Name:"nginx-794c6b99b4", UID:"41be10b4-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"989", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-794c6b99b4-7p87q
W0315 20:52:37.934] I0315 20:52:33.686323   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683151-13612", Name:"nginx-794c6b99b4", UID:"41be10b4-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"989", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-794c6b99b4-cfkfr
W0315 20:52:37.934] I0315 20:52:35.461323   73057 namespace_controller.go:171] Namespace has been deleted non-native-resources
W0315 20:52:37.934] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0315 20:52:37.935] I0315 20:52:35.496777   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552683151-13612", Name:"busybox0", UID:"42d28edd-4764-11e9-8898-0242ac110002", APIVersion:"v1", ResourceVersion:"1019", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-4m5tk
W0315 20:52:37.935] I0315 20:52:35.499372   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552683151-13612", Name:"busybox1", UID:"42d32880-4764-11e9-8898-0242ac110002", APIVersion:"v1", ResourceVersion:"1021", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-5hbht
W0315 20:52:37.935] I0315 20:52:37.115575   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552683151-13612", Name:"busybox0", UID:"42d28edd-4764-11e9-8898-0242ac110002", APIVersion:"v1", ResourceVersion:"1040", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-jsxx9
W0315 20:52:37.935] I0315 20:52:37.122227   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552683151-13612", Name:"busybox1", UID:"42d32880-4764-11e9-8898-0242ac110002", APIVersion:"v1", ResourceVersion:"1045", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-f7hw4
W0315 20:52:37.936] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0315 20:52:37.936] I0315 20:52:37.830952   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683151-13612", Name:"nginx1-deployment", UID:"4436d590-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1060", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-5dc485c78 to 2
W0315 20:52:37.936] I0315 20:52:37.834472   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683151-13612", Name:"nginx0-deployment", UID:"443764ec-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1062", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-76db6cfd79 to 2
W0315 20:52:37.937] I0315 20:52:37.835148   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683151-13612", Name:"nginx1-deployment-5dc485c78", UID:"44375593-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1061", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-5dc485c78-4d2wg
W0315 20:52:37.937] I0315 20:52:37.841950   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683151-13612", Name:"nginx1-deployment-5dc485c78", UID:"44375593-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1061", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-5dc485c78-qmtwd
W0315 20:52:37.937] I0315 20:52:37.850404   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683151-13612", Name:"nginx0-deployment-76db6cfd79", UID:"4437ea02-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1065", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-76db6cfd79-kzdmz
W0315 20:52:37.937] I0315 20:52:37.852573   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683151-13612", Name:"nginx0-deployment-76db6cfd79", UID:"4437ea02-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1065", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-76db6cfd79-q9t8w
I0315 20:52:38.038] test-cmd-util.sh:2573: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0315 20:52:38.038] test-cmd-util.sh:2574: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0315 20:52:38.357] test-cmd-util.sh:2578: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0315 20:52:38.359] Successful
I0315 20:52:38.359] message:deployment.extensions/nginx1-deployment
I0315 20:52:38.359] deployment.extensions/nginx0-deployment
I0315 20:52:38.359] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0315 20:52:38.359] has:Object 'Kind' is missing
I0315 20:52:38.444] deployment.extensions/nginx1-deployment paused
I0315 20:52:38.446] deployment.extensions/nginx0-deployment paused
I0315 20:52:38.538] test-cmd-util.sh:2585: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0315 20:52:38.540] Successful
I0315 20:52:38.540] message:error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0315 20:52:38.540] has:Object 'Kind' is missing
I0315 20:52:38.621] deployment.extensions/nginx1-deployment resumed
I0315 20:52:38.624] deployment.extensions/nginx0-deployment resumed
I0315 20:52:38.718] test-cmd-util.sh:2591: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: <no value>:<no value>:
I0315 20:52:38.720] Successful
I0315 20:52:38.720] message:error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0315 20:52:38.721] has:Object 'Kind' is missing
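
Pause and resume also accept the recursive directory. Note that the post-resume assertion expects {{.spec.paused}} to render as <no value>: resuming clears the field rather than setting it to false. A sketch of the two calls:

    kubectl rollout pause -f hack/testdata/recursive/deployment --recursive
    kubectl rollout resume -f hack/testdata/recursive/deployment --recursive
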
W0315 20:52:38.821] I0315 20:52:38.123686   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683151-13612", Name:"nginx1-deployment", UID:"4436d590-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1085", FieldPath:""}): type: 'Warning' reason: 'DeploymentRollbackTemplateUnchanged' The rollback revision contains the same template as current deployment "nginx1-deployment"
W0315 20:52:38.822] I0315 20:52:38.170139   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683151-13612", Name:"nginx0-deployment", UID:"443764ec-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1089", FieldPath:""}): type: 'Warning' reason: 'DeploymentRollbackTemplateUnchanged' The rollback revision contains the same template as current deployment "nginx0-deployment"
W0315 20:52:38.898] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0315 20:52:38.909] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0315 20:52:39.010] Successful
I0315 20:52:39.010] message:deployments "nginx1-deployment"
I0315 20:52:39.010] REVISION  CHANGE-CAUSE
I0315 20:52:39.010] 1         <none>
I0315 20:52:39.010] 
I0315 20:52:39.010] deployments "nginx0-deployment"
I0315 20:52:39.011] REVISION  CHANGE-CAUSE
I0315 20:52:39.011] 1         <none>
I0315 20:52:39.011] 
I0315 20:52:39.011] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0315 20:52:39.011] has:nginx0-deployment
I0315 20:52:39.011] Successful
I0315 20:52:39.011] message:deployments "nginx1-deployment"
I0315 20:52:39.012] REVISION  CHANGE-CAUSE
I0315 20:52:39.012] 1         <none>
I0315 20:52:39.012] 
I0315 20:52:39.012] deployments "nginx0-deployment"
I0315 20:52:39.012] REVISION  CHANGE-CAUSE
I0315 20:52:39.012] 1         <none>
I0315 20:52:39.012] 
I0315 20:52:39.013] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0315 20:52:39.013] has:nginx1-deployment
I0315 20:52:39.013] Successful
I0315 20:52:39.013] message:deployments "nginx1-deployment"
I0315 20:52:39.013] REVISION  CHANGE-CAUSE
I0315 20:52:39.013] 1         <none>
I0315 20:52:39.013] 
I0315 20:52:39.013] deployments "nginx0-deployment"
I0315 20:52:39.013] REVISION  CHANGE-CAUSE
I0315 20:52:39.013] 1         <none>
I0315 20:52:39.013] 
I0315 20:52:39.014] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"extensions/v1beta1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0315 20:52:39.014] has:Object 'Kind' is missing
I0315 20:52:39.014] deployment.extensions "nginx1-deployment" force deleted
I0315 20:52:39.014] deployment.extensions "nginx0-deployment" force deleted
I0315 20:52:40.000] test-cmd-util.sh:2607: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 20:52:40.135] replicationcontroller/busybox0 created
I0315 20:52:40.138] replicationcontroller/busybox1 created
... skipping 7 lines ...
I0315 20:52:40.310] message:no rollbacker has been implemented for {"" "ReplicationController"}
I0315 20:52:40.310] no rollbacker has been implemented for {"" "ReplicationController"}
I0315 20:52:40.311] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 20:52:40.311] has:Object 'Kind' is missing
I0315 20:52:40.395] Successful
I0315 20:52:40.396] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 20:52:40.396] error: replicationcontrollers "busybox0" pausing is not supported
I0315 20:52:40.396] error: replicationcontrollers "busybox1" pausing is not supported
I0315 20:52:40.396] has:Object 'Kind' is missing
I0315 20:52:40.397] Successful
I0315 20:52:40.397] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 20:52:40.397] error: replicationcontrollers "busybox0" pausing is not supported
I0315 20:52:40.397] error: replicationcontrollers "busybox1" pausing is not supported
I0315 20:52:40.397] has:replicationcontrollers "busybox0" pausing is not supported
I0315 20:52:40.398] Successful
I0315 20:52:40.399] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 20:52:40.399] error: replicationcontrollers "busybox0" pausing is not supported
I0315 20:52:40.399] error: replicationcontrollers "busybox1" pausing is not supported
I0315 20:52:40.399] has:replicationcontrollers "busybox1" pausing is not supported
I0315 20:52:40.484] Successful
I0315 20:52:40.484] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 20:52:40.485] error: replicationcontrollers "busybox0" resuming is not supported
I0315 20:52:40.485] error: replicationcontrollers "busybox1" resuming is not supported
I0315 20:52:40.485] has:Object 'Kind' is missing
I0315 20:52:40.486] Successful
I0315 20:52:40.486] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 20:52:40.486] error: replicationcontrollers "busybox0" resuming is not supported
I0315 20:52:40.486] error: replicationcontrollers "busybox1" resuming is not supported
I0315 20:52:40.487] has:replicationcontrollers "busybox0" resuming is not supported
I0315 20:52:40.487] Successful
I0315 20:52:40.488] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0315 20:52:40.488] error: replicationcontrollers "busybox0" resuming is not supported
I0315 20:52:40.488] error: replicationcontrollers "busybox1" resuming is not supported
I0315 20:52:40.488] has:replicationcontrollers "busybox0" resuming is not supported
I0315 20:52:40.559] replicationcontroller "busybox0" force deleted
I0315 20:52:40.564] replicationcontroller "busybox1" force deleted
W0315 20:52:40.665] I0315 20:52:40.137799   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552683151-13612", Name:"busybox0", UID:"4596ecf1-4764-11e9-8898-0242ac110002", APIVersion:"v1", ResourceVersion:"1115", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-4r4q9
W0315 20:52:40.666] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0315 20:52:40.666] I0315 20:52:40.140493   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552683151-13612", Name:"busybox1", UID:"4597824c-4764-11e9-8898-0242ac110002", APIVersion:"v1", ResourceVersion:"1117", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-6zr8w
W0315 20:52:40.666] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0315 20:52:40.667] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
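
Unlike deployments, replication controllers have no rollout implementation, so the recursive pause/resume calls fail per object with "pausing is not supported" / "resuming is not supported" after reporting the decode error. A sketch of the failing call:

    # Only kinds with rollout support (e.g. Deployment) can be paused; for an RC
    # kubectl returns 'replicationcontrollers "busybox0" pausing is not supported'.
    kubectl rollout pause -f hack/testdata/recursive/rc --recursive
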
I0315 20:52:41.588] +++ exit code: 0
I0315 20:52:42.836] Recording: run_namespace_tests
I0315 20:52:42.836] Running command: run_namespace_tests
I0315 20:52:42.853] 
I0315 20:52:42.855] +++ Running case: test-cmd.run_namespace_tests 
I0315 20:52:42.857] +++ working dir: /go/src/k8s.io/kubernetes
... skipping 2 lines ...
I0315 20:52:42.937] namespace/my-namespace created
I0315 20:52:43.027] test-cmd-util.sh:2650: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0315 20:52:43.119] namespace "my-namespace" deleted
I0315 20:52:48.210] namespace/my-namespace condition met
I0315 20:52:48.301] Successful
I0315 20:52:48.302] message:No resources found.
I0315 20:52:48.302] Error from server (NotFound): namespaces "my-namespace" not found
I0315 20:52:48.302] has: not found
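
The five-second gap between the delete at 20:52:43 and "condition met" at 20:52:48 is the test blocking until the namespace finalizer completes, so the follow-up get can assert NotFound. The "condition met" wording matches kubectl wait (new in 1.11); a sketch, assuming that is what the script runs:

    kubectl delete namespace my-namespace
    kubectl wait --for=delete namespace/my-namespace --timeout=60s
    kubectl get namespaces/my-namespace   # Error from server (NotFound)
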
I0315 20:52:48.417] test-cmd-util.sh:2665: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I0315 20:52:48.492] namespace/other created
I0315 20:52:48.582] test-cmd-util.sh:2669: Successful get namespaces/other {{.metadata.name}}: other
I0315 20:52:48.671] test-cmd-util.sh:2673: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 20:52:48.817] pod/valid-pod created
I0315 20:52:48.916] test-cmd-util.sh:2677: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0315 20:52:49.009] test-cmd-util.sh:2679: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0315 20:52:49.096] Successful
I0315 20:52:49.096] message:error: a resource cannot be retrieved by name across all namespaces
I0315 20:52:49.096] has:a resource cannot be retrieved by name across all namespaces
I0315 20:52:49.190] test-cmd-util.sh:2686: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0315 20:52:49.275] pod "valid-pod" force deleted
I0315 20:52:49.372] test-cmd-util.sh:2690: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 20:52:49.453] namespace "other" deleted
W0315 20:52:49.554] I0315 20:52:49.022713   73057 controller_utils.go:1025] Waiting for caches to sync for resource quota controller
... skipping 116 lines ...
I0315 20:53:10.493] +++ command: run_client_config_tests
I0315 20:53:10.504] +++ [0315 20:53:10] Creating namespace namespace-1552683190-32013
I0315 20:53:10.576] namespace/namespace-1552683190-32013 created
I0315 20:53:10.652] Context "test" modified.
I0315 20:53:10.657] +++ [0315 20:53:10] Testing client config
I0315 20:53:10.734] Successful
I0315 20:53:10.734] message:error: stat missing: no such file or directory
I0315 20:53:10.734] has:missing: no such file or directory
I0315 20:53:10.810] Successful
I0315 20:53:10.810] message:error: stat missing: no such file or directory
I0315 20:53:10.810] has:missing: no such file or directory
I0315 20:53:10.884] Successful
I0315 20:53:10.884] message:error: stat missing: no such file or directory
I0315 20:53:10.884] has:missing: no such file or directory
I0315 20:53:10.959] Successful
I0315 20:53:10.960] message:Error in configuration: context was not found for specified context: missing-context
I0315 20:53:10.960] has:context was not found for specified context: missing-context
I0315 20:53:11.035] Successful
I0315 20:53:11.035] message:error: no server found for cluster "missing-cluster"
I0315 20:53:11.036] has:no server found for cluster "missing-cluster"
I0315 20:53:11.120] Successful
I0315 20:53:11.120] message:auth info "missing-user" does not exist
I0315 20:53:11.120] auth info "missing-user" does not exist
I0315 20:53:11.120] has:auth info "missing-user" does not exist
I0315 20:53:11.278] Successful
I0315 20:53:11.278] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1"
I0315 20:53:11.279] has:Error loading config file
I0315 20:53:11.354] Successful
I0315 20:53:11.355] message:error: stat missing-config: no such file or directory
I0315 20:53:11.355] has:no such file or directory
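
Each Successful/has pair above probes one client misconfiguration: a nonexistent kubeconfig path, a missing context, a missing cluster, a missing user, and an unparsable config file. A sketch of invocations that produce these exact errors (the test script's flags may differ):

    kubectl get pods --kubeconfig=missing              # error: stat missing: no such file or directory
    kubectl get pods --context=missing-context         # context was not found for specified context
    kubectl get pods --cluster=missing-cluster         # error: no server found for cluster "missing-cluster"
    kubectl get pods --user=missing-user               # auth info "missing-user" does not exist
    kubectl get pods --kubeconfig=/tmp/newconfig.yaml  # no kind "Config" is registered for version "v-1"
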
I0315 20:53:11.369] +++ exit code: 0
I0315 20:53:11.403] Recording: run_service_accounts_tests
I0315 20:53:11.403] Running command: run_service_accounts_tests
I0315 20:53:11.423] 
I0315 20:53:11.425] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 76 lines ...
I0315 20:53:18.726]                 job-name=test-job
I0315 20:53:18.726]                 run=pi
I0315 20:53:18.726] Annotations:    cronjob.kubernetes.io/instantiate=manual
I0315 20:53:18.726] Parallelism:    1
I0315 20:53:18.726] Completions:    1
I0315 20:53:18.726] Start Time:     Fri, 15 Mar 2019 20:53:18 +0000
I0315 20:53:18.726] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0315 20:53:18.727] Pod Template:
I0315 20:53:18.727]   Labels:  controller-uid=5c6ea08d-4764-11e9-8898-0242ac110002
I0315 20:53:18.727]            job-name=test-job
I0315 20:53:18.727]            run=pi
I0315 20:53:18.727]   Containers:
I0315 20:53:18.727]    pi:
... skipping 304 lines ...
I0315 20:53:27.165]   selector:
I0315 20:53:27.165]     role: padawan
I0315 20:53:27.165]   sessionAffinity: None
I0315 20:53:27.165]   type: ClusterIP
I0315 20:53:27.165] status:
I0315 20:53:27.165]   loadBalancer: {}
W0315 20:53:27.265] error: you must specify resources by --filename when --local is set.
W0315 20:53:27.266] Example resource specifications include:
W0315 20:53:27.266]    '-f rsrc.yaml'
W0315 20:53:27.266]    '--filename=rsrc.json'
I0315 20:53:27.366] test-cmd-util.sh:2890: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0315 20:53:27.506] test-cmd-util.sh:2897: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0315 20:53:27.592] service "redis-master" deleted
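
The earlier "--local is set" error comes from running kubectl set selector with --local but no --filename; once a file is supplied, --local with -o yaml prints the rewritten spec (selector role=padawan, as dumped above) without touching the live service, whose selector stays redis:master:backend. Roughly (the file name here is hypothetical):

    kubectl set selector --local 'role=padawan'    # fails: requires --filename with --local
    kubectl set selector -f redis-master-service.yaml --local 'role=padawan' -o yaml
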
... skipping 42 lines ...
I0315 20:53:30.611] Context "test" modified.
I0315 20:53:30.618] +++ [0315 20:53:30] Testing kubectl(v1:daemonsets)
I0315 20:53:30.729] test-cmd-util.sh:3650: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 20:53:30.897] daemonset.extensions/bind created
I0315 20:53:31.002] test-cmd-util.sh:3654: Successful get daemonsets bind {{.spec.templateGeneration}}: 1
I0315 20:53:31.163] daemonset.extensions/bind configured
W0315 20:53:31.264] I0315 20:53:30.902345   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683210-8840", Name:"bind", UID:"63d88d13-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1278", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:31.265] I0315 20:53:30.902378   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683210-8840", Name:"bind", UID:"63d88d13-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1278", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:31.265] I0315 20:53:30.902387   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683210-8840", Name:"bind", UID:"63d88d13-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1278", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:31.265] I0315 20:53:30.905930   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683210-8840", Name:"bind", UID:"63d88d13-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1281", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:31.266] I0315 20:53:30.905960   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683210-8840", Name:"bind", UID:"63d88d13-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1281", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:31.266] I0315 20:53:30.905970   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683210-8840", Name:"bind", UID:"63d88d13-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1281", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
I0315 20:53:31.367] test-cmd-util.sh:3657: Successful get daemonsets bind {{.spec.templateGeneration}}: 1
I0315 20:53:31.368] daemonset.extensions/bind image updated
I0315 20:53:31.470] test-cmd-util.sh:3660: Successful get daemonsets bind {{.spec.templateGeneration}}: 2
I0315 20:53:31.570] daemonset.extensions/bind env updated
I0315 20:53:31.671] test-cmd-util.sh:3662: Successful get daemonsets bind {{.spec.templateGeneration}}: 3
I0315 20:53:31.763] daemonset.extensions/bind resource requirements updated
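
Each pod-template mutation bumps .spec.templateGeneration by one: 1 after create, 2 after the image update, 3 after the env update, and presumably 4 after the resource update inside the skipped lines. A sketch of updates that would drive those bumps (the container name is taken from the daemonset dump later in this log; the image, env, and limit values are illustrative):

    kubectl set image daemonsets/bind kubernetes-pause=k8s.gcr.io/pause:latest   # templateGeneration: 2
    kubectl set env daemonsets/bind FOO=bar                                      # templateGeneration: 3
    kubectl set resources daemonsets/bind --limits=cpu=200m,memory=512Mi         # templateGeneration: 4
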
... skipping 39 lines ...
I0315 20:53:34.492] test-cmd-util.sh:3700: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0315 20:53:34.574] test-cmd-util.sh:3701: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0315 20:53:34.668] daemonset.extensions/bind rolled back
I0315 20:53:34.755] test-cmd-util.sh:3704: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0315 20:53:34.839] test-cmd-util.sh:3705: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0315 20:53:34.924] Successful
I0315 20:53:34.925] message:error: unable to find specified revision 1000000 in history
I0315 20:53:34.925] has:unable to find specified revision
I0315 20:53:35.002] test-cmd-util.sh:3709: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0315 20:53:35.084] test-cmd-util.sh:3710: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0315 20:53:35.177] daemonset.extensions/bind rolled back
I0315 20:53:35.265] test-cmd-util.sh:3713: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0315 20:53:35.348] test-cmd-util.sh:3714: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
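
The rollback sequence first undoes to the single-container pause:2.0 template, rejects a nonexistent revision number, then rolls forward to the two-container template (pause:latest plus nginx:test-cmd). Sketch (the revision numbers are assumed):

    kubectl rollout undo daemonset/bind                        # back to k8s.gcr.io/pause:2.0
    kubectl rollout undo daemonset/bind --to-revision=1000000  # error: unable to find specified revision
    kubectl rollout undo daemonset/bind --to-revision=2        # forward to pause:latest + nginx:test-cmd
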
... skipping 22 lines ...
I0315 20:53:37.530] Namespace:    namespace-1552683216-32534
I0315 20:53:37.530] Selector:     app=guestbook,tier=frontend
I0315 20:53:37.530] Labels:       app=guestbook
I0315 20:53:37.530]               tier=frontend
I0315 20:53:37.530] Annotations:  <none>
I0315 20:53:37.531] Replicas:     3 current / 3 desired
I0315 20:53:37.531] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 20:53:37.531] Pod Template:
I0315 20:53:37.531]   Labels:  app=guestbook
I0315 20:53:37.531]            tier=frontend
I0315 20:53:37.531]   Containers:
I0315 20:53:37.531]    php-redis:
I0315 20:53:37.531]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0315 20:53:37.626] Namespace:    namespace-1552683216-32534
I0315 20:53:37.626] Selector:     app=guestbook,tier=frontend
I0315 20:53:37.626] Labels:       app=guestbook
I0315 20:53:37.626]               tier=frontend
I0315 20:53:37.626] Annotations:  <none>
I0315 20:53:37.626] Replicas:     3 current / 3 desired
I0315 20:53:37.626] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 20:53:37.626] Pod Template:
I0315 20:53:37.626]   Labels:  app=guestbook
I0315 20:53:37.626]            tier=frontend
I0315 20:53:37.627]   Containers:
I0315 20:53:37.627]    php-redis:
I0315 20:53:37.627]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0315 20:53:37.722] Namespace:    namespace-1552683216-32534
I0315 20:53:37.722] Selector:     app=guestbook,tier=frontend
I0315 20:53:37.723] Labels:       app=guestbook
I0315 20:53:37.723]               tier=frontend
I0315 20:53:37.723] Annotations:  <none>
I0315 20:53:37.723] Replicas:     3 current / 3 desired
I0315 20:53:37.723] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 20:53:37.723] Pod Template:
I0315 20:53:37.723]   Labels:  app=guestbook
I0315 20:53:37.723]            tier=frontend
I0315 20:53:37.723]   Containers:
I0315 20:53:37.724]    php-redis:
I0315 20:53:37.724]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I0315 20:53:37.826] Namespace:    namespace-1552683216-32534
I0315 20:53:37.826] Selector:     app=guestbook,tier=frontend
I0315 20:53:37.826] Labels:       app=guestbook
I0315 20:53:37.826]               tier=frontend
I0315 20:53:37.826] Annotations:  <none>
I0315 20:53:37.827] Replicas:     3 current / 3 desired
I0315 20:53:37.827] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 20:53:37.827] Pod Template:
I0315 20:53:37.827]   Labels:  app=guestbook
I0315 20:53:37.827]            tier=frontend
I0315 20:53:37.827]   Containers:
I0315 20:53:37.827]    php-redis:
I0315 20:53:37.827]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 10 lines ...
I0315 20:53:37.829]   Type    Reason            Age   From                    Message
I0315 20:53:37.829]   ----    ------            ----  ----                    -------
I0315 20:53:37.829]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-k9ggq
I0315 20:53:37.829]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-s5vpz
I0315 20:53:37.829]   Normal  SuccessfulCreate  0s    replication-controller  Created pod: frontend-2p5b5
I0315 20:53:37.830] 
W0315 20:53:37.930] I0315 20:53:31.372151   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683210-8840", Name:"bind", UID:"63d88d13-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1287", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.931] I0315 20:53:31.372209   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683210-8840", Name:"bind", UID:"63d88d13-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1287", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.931] I0315 20:53:31.372398   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683210-8840", Name:"bind", UID:"63d88d13-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1287", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.931] I0315 20:53:31.375975   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683210-8840", Name:"bind", UID:"63d88d13-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1289", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.932] I0315 20:53:31.376564   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683210-8840", Name:"bind", UID:"63d88d13-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1289", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.932] I0315 20:53:31.376588   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683210-8840", Name:"bind", UID:"63d88d13-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1289", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.932] I0315 20:53:31.573790   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683210-8840", Name:"bind", UID:"63d88d13-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1296", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.932] I0315 20:53:31.573836   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683210-8840", Name:"bind", UID:"63d88d13-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1296", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.933] I0315 20:53:31.573855   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683210-8840", Name:"bind", UID:"63d88d13-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1296", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.933] I0315 20:53:31.577701   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683210-8840", Name:"bind", UID:"63d88d13-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1298", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.933] I0315 20:53:31.577738   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683210-8840", Name:"bind", UID:"63d88d13-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1298", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.934] I0315 20:53:31.577751   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683210-8840", Name:"bind", UID:"63d88d13-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1298", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.934] I0315 20:53:31.767315   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683210-8840", Name:"bind", UID:"63d88d13-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1305", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.934] I0315 20:53:31.767374   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683210-8840", Name:"bind", UID:"63d88d13-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1305", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.935] I0315 20:53:31.767384   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683210-8840", Name:"bind", UID:"63d88d13-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1305", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.935] I0315 20:53:31.771384   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683210-8840", Name:"bind", UID:"63d88d13-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1307", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.935] I0315 20:53:31.771435   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683210-8840", Name:"bind", UID:"63d88d13-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1307", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.936] I0315 20:53:31.771495   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683210-8840", Name:"bind", UID:"63d88d13-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1307", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.936] I0315 20:53:33.414584   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683213-13554", Name:"bind", UID:"65584564-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1326", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.936] I0315 20:53:33.414620   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683213-13554", Name:"bind", UID:"65584564-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1326", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.937] I0315 20:53:33.414720   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683213-13554", Name:"bind", UID:"65584564-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1326", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.937] I0315 20:53:33.417290   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683213-13554", Name:"bind", UID:"65584564-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1329", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.937] I0315 20:53:33.417319   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683213-13554", Name:"bind", UID:"65584564-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1329", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.938] I0315 20:53:33.417331   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683213-13554", Name:"bind", UID:"65584564-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1329", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.938] I0315 20:53:33.910435   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683213-13554", Name:"bind", UID:"65584564-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1335", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.938] I0315 20:53:33.910470   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683213-13554", Name:"bind", UID:"65584564-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1335", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.939] I0315 20:53:33.910548   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683213-13554", Name:"bind", UID:"65584564-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1335", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.939] I0315 20:53:33.913704   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683213-13554", Name:"bind", UID:"65584564-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1337", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.939] I0315 20:53:33.913845   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683213-13554", Name:"bind", UID:"65584564-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1337", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.940] I0315 20:53:33.913873   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683213-13554", Name:"bind", UID:"65584564-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1337", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.940] I0315 20:53:34.668469   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683213-13554", Name:"bind", UID:"65584564-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1345", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.941] I0315 20:53:34.668513   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683213-13554", Name:"bind", UID:"65584564-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1345", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.941] I0315 20:53:34.668527   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683213-13554", Name:"bind", UID:"65584564-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1345", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.941] I0315 20:53:34.671904   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683213-13554", Name:"bind", UID:"65584564-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1347", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.942] I0315 20:53:34.671939   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683213-13554", Name:"bind", UID:"65584564-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1347", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.942] I0315 20:53:34.671951   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683213-13554", Name:"bind", UID:"65584564-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1347", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.943] I0315 20:53:35.176642   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683213-13554", Name:"bind", UID:"65584564-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1354", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.943] I0315 20:53:35.177025   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683213-13554", Name:"bind", UID:"65584564-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1354", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.943] I0315 20:53:35.177081   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683213-13554", Name:"bind", UID:"65584564-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1354", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.944] I0315 20:53:35.179857   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683213-13554", Name:"bind", UID:"65584564-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1354", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.944] I0315 20:53:35.179925   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683213-13554", Name:"bind", UID:"65584564-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1354", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.944] I0315 20:53:35.179978   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683213-13554", Name:"bind", UID:"65584564-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1354", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.947] E0315 20:53:35.183468   73057 daemon_controller.go:285] namespace-1552683213-13554/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1552683213-13554", SelfLink:"/apis/apps/v1/namespaces/namespace-1552683213-13554/daemonsets/bind", UID:"65584564-4764-11e9-8898-0242ac110002", ResourceVersion:"1354", Generation:4, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63688280013, loc:(*time.Location)(0x56ee260)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"4", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"name\":\"bind\",\"namespace\":\"namespace-1552683213-13554\"},\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc42484e5e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:latest", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"app", Image:"k8s.gcr.io/nginx:test-cmd", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc424854648), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc424063ce0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc42484e640), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc423b721e8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc4248546c0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:3, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
W0315 20:53:37.948] I0315 20:53:35.184339   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683213-13554", Name:"bind", UID:"65584564-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1356", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.948] I0315 20:53:35.184371   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683213-13554", Name:"bind", UID:"65584564-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1356", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.948] I0315 20:53:35.184385   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683213-13554", Name:"bind", UID:"65584564-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1356", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.949] I0315 20:53:35.189664   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683213-13554", Name:"bind", UID:"65584564-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1356", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.949] I0315 20:53:35.189710   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683213-13554", Name:"bind", UID:"65584564-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1356", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.949] I0315 20:53:35.189746   73057 event.go:221] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"namespace-1552683213-13554", Name:"bind", UID:"65584564-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1356", FieldPath:""}): type: 'Warning' reason: 'FailedPlacement' failed to place pod on "127.0.0.1": Node didn't have enough resource: pods, requested: 1, used: 0, capacity: 0
W0315 20:53:37.950] I0315 20:53:36.941355   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552683216-32534", Name:"frontend", UID:"677235da-4764-11e9-8898-0242ac110002", APIVersion:"v1", ResourceVersion:"1374", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-bp9r4
W0315 20:53:37.950] I0315 20:53:36.943618   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552683216-32534", Name:"frontend", UID:"677235da-4764-11e9-8898-0242ac110002", APIVersion:"v1", ResourceVersion:"1374", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-gmbc8
W0315 20:53:37.950] I0315 20:53:36.944043   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552683216-32534", Name:"frontend", UID:"677235da-4764-11e9-8898-0242ac110002", APIVersion:"v1", ResourceVersion:"1374", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-jh5vn
W0315 20:53:37.951] I0315 20:53:37.317641   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552683216-32534", Name:"frontend", UID:"67abc750-4764-11e9-8898-0242ac110002", APIVersion:"v1", ResourceVersion:"1390", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-k9ggq
W0315 20:53:37.951] I0315 20:53:37.320161   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552683216-32534", Name:"frontend", UID:"67abc750-4764-11e9-8898-0242ac110002", APIVersion:"v1", ResourceVersion:"1390", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-s5vpz
W0315 20:53:37.951] I0315 20:53:37.320236   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552683216-32534", Name:"frontend", UID:"67abc750-4764-11e9-8898-0242ac110002", APIVersion:"v1", ResourceVersion:"1390", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-2p5b5
... skipping 2 lines ...
I0315 20:53:38.052] Namespace:    namespace-1552683216-32534
I0315 20:53:38.052] Selector:     app=guestbook,tier=frontend
I0315 20:53:38.053] Labels:       app=guestbook
I0315 20:53:38.053]               tier=frontend
I0315 20:53:38.053] Annotations:  <none>
I0315 20:53:38.053] Replicas:     3 current / 3 desired
I0315 20:53:38.053] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 20:53:38.053] Pod Template:
I0315 20:53:38.053]   Labels:  app=guestbook
I0315 20:53:38.053]            tier=frontend
I0315 20:53:38.054]   Containers:
I0315 20:53:38.054]    php-redis:
I0315 20:53:38.054]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0315 20:53:38.065] Namespace:    namespace-1552683216-32534
I0315 20:53:38.065] Selector:     app=guestbook,tier=frontend
I0315 20:53:38.066] Labels:       app=guestbook
I0315 20:53:38.066]               tier=frontend
I0315 20:53:38.066] Annotations:  <none>
I0315 20:53:38.066] Replicas:     3 current / 3 desired
I0315 20:53:38.066] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 20:53:38.066] Pod Template:
I0315 20:53:38.066]   Labels:  app=guestbook
I0315 20:53:38.066]            tier=frontend
I0315 20:53:38.067]   Containers:
I0315 20:53:38.067]    php-redis:
I0315 20:53:38.067]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0315 20:53:38.165] Namespace:    namespace-1552683216-32534
I0315 20:53:38.165] Selector:     app=guestbook,tier=frontend
I0315 20:53:38.165] Labels:       app=guestbook
I0315 20:53:38.165]               tier=frontend
I0315 20:53:38.165] Annotations:  <none>
I0315 20:53:38.165] Replicas:     3 current / 3 desired
I0315 20:53:38.165] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 20:53:38.166] Pod Template:
I0315 20:53:38.166]   Labels:  app=guestbook
I0315 20:53:38.166]            tier=frontend
I0315 20:53:38.166]   Containers:
I0315 20:53:38.166]    php-redis:
I0315 20:53:38.166]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0315 20:53:38.266] Namespace:    namespace-1552683216-32534
I0315 20:53:38.266] Selector:     app=guestbook,tier=frontend
I0315 20:53:38.266] Labels:       app=guestbook
I0315 20:53:38.266]               tier=frontend
I0315 20:53:38.266] Annotations:  <none>
I0315 20:53:38.266] Replicas:     3 current / 3 desired
I0315 20:53:38.266] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0315 20:53:38.267] Pod Template:
I0315 20:53:38.267]   Labels:  app=guestbook
I0315 20:53:38.267]            tier=frontend
I0315 20:53:38.267]   Containers:
I0315 20:53:38.267]    php-redis:
I0315 20:53:38.267]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 22 lines ...
I0315 20:53:39.062] test-cmd-util.sh:3065: Successful get rc frontend {{.spec.replicas}}: 3
I0315 20:53:39.155] test-cmd-util.sh:3069: Successful get rc frontend {{.spec.replicas}}: 3
I0315 20:53:39.242] replicationcontroller/frontend scaled
I0315 20:53:39.333] test-cmd-util.sh:3073: Successful get rc frontend {{.spec.replicas}}: 2
I0315 20:53:39.414] replicationcontroller "frontend" deleted
W0315 20:53:39.515] I0315 20:53:38.450893   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552683216-32534", Name:"frontend", UID:"67abc750-4764-11e9-8898-0242ac110002", APIVersion:"v1", ResourceVersion:"1401", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-2p5b5
W0315 20:53:39.515] error: Expected replicas to be 3, was 2
W0315 20:53:39.516] I0315 20:53:38.972648   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552683216-32534", Name:"frontend", UID:"67abc750-4764-11e9-8898-0242ac110002", APIVersion:"v1", ResourceVersion:"1407", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-m72vd
W0315 20:53:39.516] I0315 20:53:39.246093   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552683216-32534", Name:"frontend", UID:"67abc750-4764-11e9-8898-0242ac110002", APIVersion:"v1", ResourceVersion:"1412", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-m72vd
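The "Expected replicas to be 3, was 2" line above is kubectl's scale precondition at work: when --current-replicas is supplied, the scale request is rejected unless it matches the live object. A minimal sketch against the test's frontend RC, assuming it had already been scaled down to 2:

  kubectl scale rc frontend --current-replicas=3 --replicas=2   # succeeds while 3 replicas exist
  kubectl scale rc frontend --current-replicas=3 --replicas=2   # now fails: Expected replicas to be 3, was 2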
W0315 20:53:39.565] I0315 20:53:39.564910   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552683216-32534", Name:"redis-master", UID:"6902cd85-4764-11e9-8898-0242ac110002", APIVersion:"v1", ResourceVersion:"1423", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-cmpfp
I0315 20:53:39.666] replicationcontroller/redis-master created
I0315 20:53:39.713] replicationcontroller/redis-slave created
I0315 20:53:39.810] replicationcontroller/redis-master scaled
... skipping 56 lines ...
I0315 20:53:43.058] service "frontend-4" deleted
I0315 20:53:43.064] service "frontend-5" deleted
W0315 20:53:43.165] I0315 20:53:41.677829   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552683216-32534", Name:"frontend", UID:"6a4509cb-4764-11e9-8898-0242ac110002", APIVersion:"v1", ResourceVersion:"1538", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-xdpg9
W0315 20:53:43.166] I0315 20:53:41.679710   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552683216-32534", Name:"frontend", UID:"6a4509cb-4764-11e9-8898-0242ac110002", APIVersion:"v1", ResourceVersion:"1538", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-fpqp7
W0315 20:53:43.166] I0315 20:53:41.680090   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1552683216-32534", Name:"frontend", UID:"6a4509cb-4764-11e9-8898-0242ac110002", APIVersion:"v1", ResourceVersion:"1538", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-f5rrf
I0315 20:53:43.267] Successful
I0315 20:53:43.267] message:error: cannot expose a { Node}
I0315 20:53:43.267] has:cannot expose
I0315 20:53:43.273] Successful
I0315 20:53:43.274] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0315 20:53:43.274] has:metadata.name: Invalid value
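Both expose failures above are client-side validations: a Node carries no selector or ports to build a Service from, and a Service name is a DNS-1035 label capped at 63 characters. A sketch; the first command is an assumption about what produced the "cannot expose a { Node}" message:

  kubectl expose node 127.0.0.1 --port=80    # rejected: nothing on a Node to expose
  kubectl expose rc frontend --port=80 \
    --name=invalid-large-service-name-that-has-more-than-sixty-three-characters
  # rejected: metadata.name must be no more than 63 characters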
I0315 20:53:43.380] Successful
I0315 20:53:43.380] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 30 lines ...
I0315 20:53:45.451] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0315 20:53:45.548] test-cmd-util.sh:3209: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I0315 20:53:45.638] horizontalpodautoscaler.autoscaling "frontend" deleted
I0315 20:53:45.736] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0315 20:53:45.837] test-cmd-util.sh:3213: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0315 20:53:45.923] horizontalpodautoscaler.autoscaling "frontend" deleted
W0315 20:53:46.024] Error: required flag(s) "max" not set
W0315 20:53:46.024] 
W0315 20:53:46.024] 
W0315 20:53:46.024] Examples:
W0315 20:53:46.025]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0315 20:53:46.025]   kubectl autoscale deployment foo --min=2 --max=10
W0315 20:53:46.025]   
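The hpa assertions at test-cmd-util.sh:3209 and 3213 read back minReplicas, maxReplicas, and targetCPUUtilizationPercentage, and the usage dump above fires because --max is the one mandatory autoscale flag. The flag values below mirror the assertions; the exact test invocations are elided from this excerpt, and --min left unset defaults server-side to 1:

  kubectl autoscale rc frontend --max=2 --cpu-percent=70          # yields "1 2 70"
  kubectl autoscale rc frontend --min=2 --max=3 --cpu-percent=80  # yields "2 3 80"
  kubectl autoscale rc frontend --min=2                           # Error: required flag(s) "max" not set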
... skipping 70 lines ...
I0315 20:53:46.284]       dnsPolicy: ClusterFirst
I0315 20:53:46.284]       restartPolicy: Always
I0315 20:53:46.284]       schedulerName: default-scheduler
I0315 20:53:46.284]       securityContext: {}
I0315 20:53:46.285]       terminationGracePeriodSeconds: 0
I0315 20:53:46.285] status: {}
W0315 20:53:46.385] Error from server (NotFound): deployments.extensions "nginx-deployment-resources" not found
I0315 20:53:46.527] deployment.extensions/nginx-deployment-resources created
W0315 20:53:46.628] I0315 20:53:46.530255   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683216-32534", Name:"nginx-deployment-resources", UID:"6d297146-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1650", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-57c6b5597b to 3
W0315 20:53:46.628] I0315 20:53:46.533407   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683216-32534", Name:"nginx-deployment-resources-57c6b5597b", UID:"6d2a0132-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1651", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-57c6b5597b-b8r4b
W0315 20:53:46.629] I0315 20:53:46.535782   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683216-32534", Name:"nginx-deployment-resources-57c6b5597b", UID:"6d2a0132-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1651", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-57c6b5597b-nfhxm
W0315 20:53:46.629] I0315 20:53:46.536192   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683216-32534", Name:"nginx-deployment-resources-57c6b5597b", UID:"6d2a0132-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1651", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-57c6b5597b-dbggn
I0315 20:53:46.729] test-cmd-util.sh:3228: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
... skipping 5 lines ...
I0315 20:53:47.346] deployment.extensions/nginx-deployment-resources resource requirements updated
W0315 20:53:47.447] I0315 20:53:46.942085   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683216-32534", Name:"nginx-deployment-resources", UID:"6d297146-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1664", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-79bfbb6584 to 1
W0315 20:53:47.447] I0315 20:53:46.944949   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683216-32534", Name:"nginx-deployment-resources-79bfbb6584", UID:"6d68d161-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1665", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-79bfbb6584-8ldl4
W0315 20:53:47.448] I0315 20:53:46.947680   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683216-32534", Name:"nginx-deployment-resources", UID:"6d297146-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1664", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-57c6b5597b to 2
W0315 20:53:47.448] I0315 20:53:46.953036   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683216-32534", Name:"nginx-deployment-resources-57c6b5597b", UID:"6d2a0132-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1671", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-57c6b5597b-b8r4b
W0315 20:53:47.448] I0315 20:53:46.953497   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683216-32534", Name:"nginx-deployment-resources", UID:"6d297146-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1667", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-79bfbb6584 to 2
W0315 20:53:47.449] E0315 20:53:46.954621   73057 replica_set.go:450] Sync "namespace-1552683216-32534/nginx-deployment-resources-79bfbb6584" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-resources-79bfbb6584": the object has been modified; please apply your changes to the latest version and try again
W0315 20:53:47.449] I0315 20:53:46.958388   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683216-32534", Name:"nginx-deployment-resources-79bfbb6584", UID:"6d68d161-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1675", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-79bfbb6584-8r76k
W0315 20:53:47.449] error: unable to find container named redis
W0315 20:53:47.449] I0315 20:53:47.355907   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683216-32534", Name:"nginx-deployment-resources", UID:"6d297146-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1688", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-57c6b5597b to 0
W0315 20:53:47.450] I0315 20:53:47.361589   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683216-32534", Name:"nginx-deployment-resources", UID:"6d297146-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1690", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-775fc4497d to 2
W0315 20:53:47.450] I0315 20:53:47.361657   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683216-32534", Name:"nginx-deployment-resources-57c6b5597b", UID:"6d2a0132-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1692", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-57c6b5597b-nfhxm
W0315 20:53:47.450] I0315 20:53:47.365379   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683216-32534", Name:"nginx-deployment-resources-775fc4497d", UID:"6da6ed98-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1696", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-775fc4497d-fnrp6
W0315 20:53:47.451] I0315 20:53:47.365716   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683216-32534", Name:"nginx-deployment-resources-57c6b5597b", UID:"6d2a0132-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1692", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-57c6b5597b-dbggn
W0315 20:53:47.451] I0315 20:53:47.367624   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683216-32534", Name:"nginx-deployment-resources-775fc4497d", UID:"6da6ed98-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1696", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-775fc4497d-pbn77
... skipping 77 lines ...
I0315 20:53:48.085]     status: "False"
I0315 20:53:48.085]     type: Available
I0315 20:53:48.085]   observedGeneration: 4
I0315 20:53:48.085]   replicas: 4
I0315 20:53:48.085]   unavailableReplicas: 4
I0315 20:53:48.086]   updatedReplicas: 2
W0315 20:53:48.186] error: you must specify resources by --filename when --local is set.
W0315 20:53:48.186] Example resource specifications include:
W0315 20:53:48.186]    '-f rsrc.yaml'
W0315 20:53:48.186]    '--filename=rsrc.json'
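The --local error above is a flag-dependency check: --local renders the change entirely client-side, so there must be a file to operate on. A sketch with a hypothetical manifest name:

  kubectl set resources -f deploy.yaml --limits=cpu=200m,memory=512Mi --local -o yaml
  # prints the modified object without contacting the server; omitting -f triggers
  # "you must specify resources by --filename when --local is set."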
I0315 20:53:48.287] test-cmd-util.sh:3249: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0315 20:53:48.350] test-cmd-util.sh:3250: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I0315 20:53:48.447] test-cmd-util.sh:3251: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 44 lines ...
I0315 20:53:49.967]                 pod-template-hash=1594316396
I0315 20:53:49.967] Annotations:    deployment.kubernetes.io/desired-replicas=1
I0315 20:53:49.967]                 deployment.kubernetes.io/max-replicas=2
I0315 20:53:49.967]                 deployment.kubernetes.io/revision=1
I0315 20:53:49.967] Controlled By:  Deployment/test-nginx-apps
I0315 20:53:49.967] Replicas:       1 current / 1 desired
I0315 20:53:49.968] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0315 20:53:49.968] Pod Template:
I0315 20:53:49.968]   Labels:  app=test-nginx-apps
I0315 20:53:49.968]            pod-template-hash=1594316396
I0315 20:53:49.968]   Containers:
I0315 20:53:49.968]    nginx:
I0315 20:53:49.968]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 82 lines ...
W0315 20:53:53.694] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
W0315 20:53:53.694] I0315 20:53:53.595342   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683228-32713", Name:"nginx", UID:"71022d5d-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1883", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-6598b6dfdb to 1
W0315 20:53:53.695] I0315 20:53:53.597835   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683228-32713", Name:"nginx-6598b6dfdb", UID:"71600d09-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1884", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6598b6dfdb-b5nvx
W0315 20:53:53.695] I0315 20:53:53.603346   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683228-32713", Name:"nginx", UID:"71022d5d-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1883", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-794c6b99b4 to 2
W0315 20:53:53.696] I0315 20:53:53.613030   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683228-32713", Name:"nginx-794c6b99b4", UID:"7102bc4e-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1890", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-794c6b99b4-jbx86
W0315 20:53:53.696] I0315 20:53:53.616844   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683228-32713", Name:"nginx", UID:"71022d5d-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1886", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-6598b6dfdb to 2
W0315 20:53:53.696] E0315 20:53:53.618382   73057 replica_set.go:450] Sync "namespace-1552683228-32713/nginx-6598b6dfdb" failed with Operation cannot be fulfilled on replicasets.apps "nginx-6598b6dfdb": the object has been modified; please apply your changes to the latest version and try again
W0315 20:53:53.697] I0315 20:53:53.620330   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683228-32713", Name:"nginx-6598b6dfdb", UID:"71600d09-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1897", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6598b6dfdb-7dvg7
I0315 20:53:53.797] test-cmd-util.sh:3367: Successful get deployment.extensions {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0315 20:53:53.804]     Image:	k8s.gcr.io/nginx:test-cmd
I0315 20:53:53.899] test-cmd-util.sh:3370: Successful get deployment.extensions {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0315 20:53:54.034] deployment.extensions/nginx
W0315 20:53:54.135] I0315 20:53:53.992938   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683228-32713", Name:"nginx", UID:"71022d5d-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1907", FieldPath:""}): type: 'Normal' reason: 'DeploymentRollback' Rolled back deployment "nginx" to revision 1
... skipping 2 lines ...
I0315 20:53:55.337] test-cmd-util.sh:3377: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0315 20:53:55.538] deployment.extensions/nginx
W0315 20:53:55.639] I0315 20:53:55.233044   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683228-32713", Name:"nginx", UID:"71022d5d-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1914", FieldPath:""}): type: 'Warning' reason: 'DeploymentRollbackRevisionNotFound' Unable to find the revision to rollback to.
W0315 20:53:55.639] I0315 20:53:55.440979   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683228-32713", Name:"nginx", UID:"71022d5d-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1918", FieldPath:""}): type: 'Normal' reason: 'DeploymentRollback' Rolled back deployment "nginx" to revision 2
I0315 20:53:56.653] test-cmd-util.sh:3381: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0315 20:53:56.749] deployment.extensions/nginx paused
W0315 20:53:56.849] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
I0315 20:53:56.950] deployment.extensions/nginx resumed
W0315 20:53:57.059] I0315 20:53:57.058332   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683228-32713", Name:"nginx", UID:"71022d5d-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1929", FieldPath:""}): type: 'Normal' reason: 'DeploymentRollback' Rolled back deployment "nginx" to revision 3
I0315 20:53:57.159] deployment.extensions/nginx
I0315 20:53:57.353]     deployment.kubernetes.io/revision-history: 1,3
W0315 20:53:57.459] error: desired revision (3) is different from the running revision (5)
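Three rollback outcomes appear above: undo is refused while the deployment is paused, DeploymentRollbackRevisionNotFound fires for a revision missing from history, and the final line compares a requested revision against the one actually running. A sketch of the sequence, assuming the last error comes from rollout status with an explicit --revision:

  kubectl rollout pause deployment/nginx
  kubectl rollout undo deployment/nginx                  # refused until resumed
  kubectl rollout resume deployment/nginx
  kubectl rollout undo deployment/nginx --to-revision=3
  kubectl rollout status deployment/nginx --revision=3   # errors if revision 5 is now running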
I0315 20:53:57.618] deployment.extensions/nginx2 created
W0315 20:53:57.719] I0315 20:53:57.622167   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683228-32713", Name:"nginx2", UID:"73c5d683-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1935", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx2-5d58d7d8d4 to 3
W0315 20:53:57.720] I0315 20:53:57.623918   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683228-32713", Name:"nginx2-5d58d7d8d4", UID:"73c672f9-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1936", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-5d58d7d8d4-xj7kv
W0315 20:53:57.720] I0315 20:53:57.626556   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683228-32713", Name:"nginx2-5d58d7d8d4", UID:"73c672f9-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1936", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-5d58d7d8d4-h86wz
W0315 20:53:57.720] I0315 20:53:57.626909   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683228-32713", Name:"nginx2-5d58d7d8d4", UID:"73c672f9-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1936", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-5d58d7d8d4-5l6xn
I0315 20:53:57.821] deployment.extensions "nginx2" deleted
... skipping 9 lines ...
I0315 20:53:58.410] test-cmd-util.sh:3408: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0315 20:53:58.503] deployment.extensions/nginx-deployment image updated
W0315 20:53:58.604] I0315 20:53:58.506466   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683228-32713", Name:"nginx-deployment", UID:"7410e6e0-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1984", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-78d7b4bff9 to 1
W0315 20:53:58.604] I0315 20:53:58.509654   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683228-32713", Name:"nginx-deployment-78d7b4bff9", UID:"744d6fd0-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1985", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-78d7b4bff9-89vkn
W0315 20:53:58.605] I0315 20:53:58.512953   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683228-32713", Name:"nginx-deployment", UID:"7410e6e0-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1984", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-84765bf7f9 to 2
W0315 20:53:58.605] I0315 20:53:58.517270   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683228-32713", Name:"nginx-deployment-84765bf7f9", UID:"74116911-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1991", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-84765bf7f9-gcq7q
W0315 20:53:58.605] E0315 20:53:58.517786   73057 replica_set.go:450] Sync "namespace-1552683228-32713/nginx-deployment-78d7b4bff9" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-78d7b4bff9": the object has been modified; please apply your changes to the latest version and try again
W0315 20:53:58.606] I0315 20:53:58.518035   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683228-32713", Name:"nginx-deployment", UID:"7410e6e0-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1986", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-78d7b4bff9 to 2
W0315 20:53:58.606] I0315 20:53:58.527529   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683228-32713", Name:"nginx-deployment-78d7b4bff9", UID:"744d6fd0-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2001", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-78d7b4bff9-vc4df
I0315 20:53:58.706] test-cmd-util.sh:3411: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0315 20:53:58.707] test-cmd-util.sh:3412: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0315 20:53:58.894] deployment.extensions/nginx-deployment image updated
I0315 20:53:58.989] test-cmd-util.sh:3417: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 8 lines ...
I0315 20:53:59.919] test-cmd-util.sh:3430: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0315 20:54:00.102] test-cmd-util.sh:3433: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0315 20:54:00.191] test-cmd-util.sh:3434: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0315 20:54:00.275] deployment.extensions "nginx-deployment" deleted
I0315 20:54:00.372] test-cmd-util.sh:3440: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 20:54:00.517] deployment.extensions/nginx-deployment created
W0315 20:54:00.617] error: unable to find container named "redis"
W0315 20:54:00.618] I0315 20:53:59.731654   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683228-32713", Name:"nginx-deployment", UID:"7410e6e0-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2018", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-78d7b4bff9 to 0
W0315 20:54:00.618] I0315 20:53:59.735989   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683228-32713", Name:"nginx-deployment-78d7b4bff9", UID:"744d6fd0-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2022", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-78d7b4bff9-89vkn
W0315 20:54:00.618] I0315 20:53:59.736106   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683228-32713", Name:"nginx-deployment-78d7b4bff9", UID:"744d6fd0-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2022", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-78d7b4bff9-vc4df
W0315 20:54:00.619] I0315 20:53:59.737642   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683228-32713", Name:"nginx-deployment", UID:"7410e6e0-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2020", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-786f9d445c to 2
W0315 20:54:00.619] I0315 20:53:59.739733   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683228-32713", Name:"nginx-deployment-786f9d445c", UID:"75076745-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2028", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-786f9d445c-vmcwc
W0315 20:54:00.619] I0315 20:53:59.742653   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683228-32713", Name:"nginx-deployment-786f9d445c", UID:"75076745-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2028", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-786f9d445c-d4sfv
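The 'unable to find container named "redis"' error recurs here because kubectl set image resolves container names against the pod template before patching, and this deployment only carries nginx and perl containers. A sketch; the redis image tag is illustrative:

  kubectl set image deployment nginx-deployment redis=redis:latest             # fails: no such container
  kubectl set image deployment nginx-deployment nginx=k8s.gcr.io/nginx:1.7.9   # matches and patches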
... skipping 8 lines ...
I0315 20:54:01.131] test-cmd-util.sh:3447: Successful get secret {{range.items}}{{.metadata.name}}:{{end}}: test-set-env-secret:
I0315 20:54:01.239] deployment.extensions/nginx-deployment env updated
W0315 20:54:01.339] I0315 20:54:01.243400   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683228-32713", Name:"nginx-deployment", UID:"758022a7-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2073", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-cdbc49cff to 1
W0315 20:54:01.340] I0315 20:54:01.246997   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683228-32713", Name:"nginx-deployment-cdbc49cff", UID:"75eedfea-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2074", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-cdbc49cff-9jg7f
W0315 20:54:01.341] I0315 20:54:01.249646   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683228-32713", Name:"nginx-deployment", UID:"758022a7-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2073", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-84765bf7f9 to 2
W0315 20:54:01.341] I0315 20:54:01.254212   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683228-32713", Name:"nginx-deployment-84765bf7f9", UID:"7580b39c-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2079", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-84765bf7f9-vw4qp
W0315 20:54:01.341] E0315 20:54:01.255522   73057 replica_set.go:450] Sync "namespace-1552683228-32713/nginx-deployment-cdbc49cff" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-cdbc49cff": the object has been modified; please apply your changes to the latest version and try again
W0315 20:54:01.342] I0315 20:54:01.255515   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683228-32713", Name:"nginx-deployment", UID:"758022a7-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2077", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-cdbc49cff to 2
W0315 20:54:01.342] I0315 20:54:01.259695   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683228-32713", Name:"nginx-deployment-cdbc49cff", UID:"75eedfea-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2084", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-cdbc49cff-pn2ps
I0315 20:54:01.443] test-cmd-util.sh:3451: Successful get deploy nginx-deployment {{ (index (index .spec.template.spec.containers 0).env 0).name}}: KEY_2
I0315 20:54:01.446] test-cmd-util.sh:3453: Successful get deploy nginx-deployment {{ len (index .spec.template.spec.containers 0).env }}: 1
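The two env assertions read the first env var's name and the env list length via go-template, which pins down what set env did: KEY_2 was added and the previous key dropped (a trailing "-" unsets a variable). Key names mirror the assertions; the removal step is an assumption about the elided test commands:

  kubectl set env deployment nginx-deployment KEY_2=value2 KEY_1-   # add KEY_2, unset KEY_1 (assumed)
  kubectl get deploy nginx-deployment \
    -o go-template='{{(index (index .spec.template.spec.containers 0).env 0).name}}'   # prints KEY_2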
I0315 20:54:01.554] deployment.extensions/nginx-deployment env updated
W0315 20:54:01.655] I0315 20:54:01.564400   73057 event.go:221] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1552683228-32713", Name:"nginx-deployment", UID:"758022a7-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2097", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-84765bf7f9 to 0
... skipping 37 lines ...
I0315 20:54:03.092] test-cmd-util.sh:3492: Successful get pods -l "tier=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 20:54:03.184] test-cmd-util.sh:3496: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I0315 20:54:03.322] replicaset.extensions/frontend created
I0315 20:54:03.413] Waiting for Get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}} : expected: php-redis:php-redis:php-redis:, got: 
I0315 20:54:03.497] Waiting for Get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}} : expected: php-redis:php-redis:php-redis:, got: 
W0315 20:54:03.597] I0315 20:54:02.230478   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683228-32713", Name:"nginx-deployment-5fcdc7cb99", UID:"763e0910-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2157", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-5fcdc7cb99-mjbh9
W0315 20:54:03.598] I0315 20:54:02.277487   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683228-32713", Name:"nginx-deployment-5fcdc7cb99", UID:"763e0910-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2157", FieldPath:""}): type: 'Warning' reason: 'FailedDelete' Error deleting: pods "nginx-deployment-5fcdc7cb99-dbktm" not found
W0315 20:54:03.598] I0315 20:54:02.329223   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683228-32713", Name:"nginx-deployment-67c9c8994", UID:"764d8b61-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2159", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-67c9c8994-m986q
W0315 20:54:03.598] I0315 20:54:02.579327   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683228-32713", Name:"nginx-deployment-67c9c8994", UID:"764d8b61-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2159", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-67c9c8994-2sxlj
W0315 20:54:03.598] E0315 20:54:02.627382   73057 replica_set.go:450] Sync "namespace-1552683228-32713/nginx-deployment-7b6cf544d6" failed with replicasets.apps "nginx-deployment-7b6cf544d6" not found
W0315 20:54:03.599] E0315 20:54:02.727816   73057 replica_set.go:450] Sync "namespace-1552683228-32713/nginx-deployment-f7b94bfb8" failed with replicasets.apps "nginx-deployment-f7b94bfb8" not found
W0315 20:54:03.599] E0315 20:54:02.777502   73057 replica_set.go:450] Sync "namespace-1552683228-32713/nginx-deployment-5fcdc7cb99" failed with replicasets.apps "nginx-deployment-5fcdc7cb99" not found
W0315 20:54:03.599] E0315 20:54:02.927411   73057 replica_set.go:450] Sync "namespace-1552683228-32713/nginx-deployment-67c9c8994" failed with replicasets.apps "nginx-deployment-67c9c8994" not found
W0315 20:54:03.599] I0315 20:54:03.030600   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683242-14833", Name:"frontend", UID:"76eca90b-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2194", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-w49tb
W0315 20:54:03.600] I0315 20:54:03.128644   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683242-14833", Name:"frontend", UID:"76eca90b-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2194", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-87gth
W0315 20:54:03.600] I0315 20:54:03.178540   73057 event.go:221] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1552683242-14833", Name:"frontend", UID:"76eca90b-4764-11e9-8898-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2194", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-j2h98
I0315 20:54:04.582] Waiting for Get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}} : expected: php-redis:php-redis:php-redis:, got: 
I0315 20:54:06.664] Waiting for Get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}} : expected: php-redis:php-redis:php-redis:, got: 
I0315 20:54:09.751] Waiting for Get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}} : expected: php-redis:php-redis:php-redis:, got: 
... skipping 2 lines ...
I0315 20:54:18.926] Waiting for Get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}} : expected: php-redis:php-redis:php-redis:, got: 
W0315 20:54:22.466] I0315 20:54:22.465481   73057 horizontal.go:366] Horizontal Pod Autoscaler has been deleted namespace-1552683228-32713/nginx-deployment
I0315 20:54:25.013] Waiting for Get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}} : expected: php-redis:php-redis:php-redis:, got: 
I0315 20:54:32.099] Waiting for Get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}} : expected: php-redis:php-redis:php-redis:, got: 
I0315 20:54:40.189] Waiting for Get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}} : expected: php-redis:php-redis:php-redis:, got: 
I0315 20:54:49.191] 
I0315 20:54:49.197] test-cmd-util.sh:3500: FAIL!
I0315 20:54:49.197] Get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}
I0315 20:54:49.197]   Expected: php-redis:php-redis:php-redis:
I0315 20:54:49.198]   Got:      
I0315 20:54:49.198] 
I0315 20:54:49.198] 61 /go/src/k8s.io/kubernetes/hack/lib/test.sh
I0315 20:54:49.198] 
I0315 20:54:49.235] +++ exit code: 1
I0315 20:54:49.241] +++ error: 1
I0315 20:54:49.285] Error when running run_rs_tests
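The run_rs_tests failure is a wait_object_assert timeout rather than a hard error: the controller events at 20:54:03 show three frontend pods being created, yet the polled template stays empty for the full window. The poll is reproducible by hand; -n must match the test namespace from the events above, since querying any other namespace returns exactly this empty string:

  kubectl get pods -n namespace-1552683242-14833 -l tier=frontend \
    -o go-template='{{range.items}}{{(index .spec.containers 0).name}}:{{end}}'
  # the assertion passes once this prints php-redis:php-redis:php-redis: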
I0315 20:54:49.285] Recording: run_stateful_set_tests
I0315 20:54:49.285] Running command: run_stateful_set_tests
I0315 20:54:49.304] 
I0315 20:54:49.306] +++ Running case: test-cmd.run_stateful_set_tests 
I0315 20:54:49.308] +++ working dir: /go/src/k8s.io/kubernetes
I0315 20:54:49.310] +++ command: run_stateful_set_tests
... skipping 63 lines ...
I0315 20:54:52.217] test-cmd-util.sh:3750: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0315 20:54:52.309] test-cmd-util.sh:3751: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0315 20:54:52.411] statefulset.apps/nginx rolled back
I0315 20:54:52.512] test-cmd-util.sh:3754: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0315 20:54:52.607] test-cmd-util.sh:3755: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0315 20:54:52.708] Successful
I0315 20:54:52.708] message:error: unable to find specified revision 1000000 in history
I0315 20:54:52.708] has:unable to find specified revision
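StatefulSets keep their own controller-revision history, so rollout undo accepts --to-revision but rejects numbers outside that history, as the message above shows. A sketch:

  kubectl rollout history statefulset/nginx                      # list available revisions
  kubectl rollout undo statefulset/nginx --to-revision=1000000   # error: unable to find specified revision
  kubectl rollout undo statefulset/nginx                         # back one revision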
I0315 20:54:52.800] test-cmd-util.sh:3759: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0315 20:54:52.892] test-cmd-util.sh:3760: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0315 20:54:52.991] statefulset.apps/nginx rolled back
I0315 20:54:53.087] test-cmd-util.sh:3763: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
I0315 20:54:53.180] test-cmd-util.sh:3764: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
... skipping 58 lines ...
I0315 20:54:55.004] Name:         mock
I0315 20:54:55.004] Namespace:    namespace-1552683294-27353
I0315 20:54:55.004] Selector:     app=mock
I0315 20:54:55.004] Labels:       app=mock
I0315 20:54:55.004] Annotations:  <none>
I0315 20:54:55.004] Replicas:     1 current / 1 desired
I0315 20:54:55.004] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0315 20:54:55.004] Pod Template:
I0315 20:54:55.004]   Labels:  app=mock
I0315 20:54:55.004]   Containers:
I0315 20:54:55.005]    mock-container:
I0315 20:54:55.005]     Image:        k8s.gcr.io/pause:2.0
I0315 20:54:55.005]     Port:         9949/TCP
... skipping 56 lines ...
I0315 20:54:57.166] Name:         mock
I0315 20:54:57.166] Namespace:    namespace-1552683294-27353
I0315 20:54:57.166] Selector:     app=mock
I0315 20:54:57.166] Labels:       app=mock
I0315 20:54:57.166] Annotations:  <none>
I0315 20:54:57.166] Replicas:     1 current / 1 desired
I0315 20:54:57.166] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0315 20:54:57.166] Pod Template:
I0315 20:54:57.166]   Labels:  app=mock
I0315 20:54:57.166]   Containers:
I0315 20:54:57.166]    mock-container:
I0315 20:54:57.166]     Image:        k8s.gcr.io/pause:2.0
I0315 20:54:57.166]     Port:         9949/TCP
... skipping 56 lines ...
I0315 20:54:59.364] Name:         mock
I0315 20:54:59.364] Namespace:    namespace-1552683294-27353
I0315 20:54:59.364] Selector:     app=mock
I0315 20:54:59.364] Labels:       app=mock
I0315 20:54:59.364] Annotations:  <none>
I0315 20:54:59.364] Replicas:     1 current / 1 desired
I0315 20:54:59.364] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0315 20:54:59.364] Pod Template:
I0315 20:54:59.364]   Labels:  app=mock
I0315 20:54:59.364]   Containers:
I0315 20:54:59.364]    mock-container:
I0315 20:54:59.364]     Image:        k8s.gcr.io/pause:2.0
I0315 20:54:59.365]     Port:         9949/TCP
... skipping 42 lines ...
I0315 20:55:01.490] Namespace:    namespace-1552683294-27353
I0315 20:55:01.490] Selector:     app=mock
I0315 20:55:01.491] Labels:       app=mock
I0315 20:55:01.491]               status=replaced
I0315 20:55:01.491] Annotations:  <none>
I0315 20:55:01.491] Replicas:     1 current / 1 desired
I0315 20:55:01.491] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0315 20:55:01.491] Pod Template:
I0315 20:55:01.491]   Labels:  app=mock
I0315 20:55:01.491]   Containers:
I0315 20:55:01.491]    mock-container:
I0315 20:55:01.491]     Image:        k8s.gcr.io/pause:2.0
I0315 20:55:01.491]     Port:         9949/TCP
... skipping 11 lines ...
I0315 20:55:01.496] Namespace:    namespace-1552683294-27353
I0315 20:55:01.497] Selector:     app=mock2
I0315 20:55:01.497] Labels:       app=mock2
I0315 20:55:01.497]               status=replaced
I0315 20:55:01.497] Annotations:  <none>
I0315 20:55:01.497] Replicas:     1 current / 1 desired
I0315 20:55:01.497] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0315 20:55:01.497] Pod Template:
I0315 20:55:01.497]   Labels:  app=mock2
I0315 20:55:01.497]   Containers:
I0315 20:55:01.498]    mock-container:
I0315 20:55:01.498]     Image:        k8s.gcr.io/pause:2.0
I0315 20:55:01.498]     Port:         9949/TCP
... skipping 580 lines ...
I0315 20:55:11.138] yes
I0315 20:55:11.139] has:the server doesn't have a resource type
I0315 20:55:11.213] Successful
I0315 20:55:11.214] message:yes
I0315 20:55:11.214] has:yes
I0315 20:55:11.295] Successful
I0315 20:55:11.295] message:error: --subresource can not be used with NonResourceURL
I0315 20:55:11.296] has:subresource can not be used with NonResourceURL
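kubectl auth can-i treats arguments starting with "/" as NonResourceURLs, which have no subresources, hence the rejection above. A sketch:

  kubectl auth can-i get pods --subresource=log    # resource plus subresource: valid
  kubectl auth can-i get /logs                     # non-resource URL: valid
  kubectl auth can-i get /logs --subresource=log   # error: --subresource can not be used with NonResourceURL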
I0315 20:55:11.374] Successful
I0315 20:55:11.456] Successful
I0315 20:55:11.456] message:yes
I0315 20:55:11.457] 0
I0315 20:55:11.457] has:0
... skipping 821 lines ...
I0315 20:55:38.008] message:node/127.0.0.1 already uncordoned (dry run)
I0315 20:55:38.008] has:already uncordoned
I0315 20:55:38.099] test-cmd-util.sh:4971: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
I0315 20:55:38.179] node/127.0.0.1 labeled
I0315 20:55:38.279] test-cmd-util.sh:4976: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
I0315 20:55:38.352] Successful
I0315 20:55:38.352] message:error: cannot specify both a node name and a --selector option
I0315 20:55:38.352] See 'kubectl drain -h' for help and examples.
I0315 20:55:38.352] has:cannot specify both a node name
I0315 20:55:38.424] Successful
I0315 20:55:38.425] message:error: USAGE: cordon NODE [flags]
I0315 20:55:38.425] See 'kubectl cordon -h' for help and examples.
I0315 20:55:38.425] has:error\: USAGE\: cordon NODE
I0315 20:55:38.505] node/127.0.0.1 already uncordoned
I0315 20:55:38.585] Successful
I0315 20:55:38.585] message:error: You must provide one or more resources by argument or filename.
I0315 20:55:38.585] Example resource specifications include:
I0315 20:55:38.585]    '-f rsrc.yaml'
I0315 20:55:38.585]    '--filename=rsrc.json'
I0315 20:55:38.586]    '<resource> <name>'
I0315 20:55:38.586]    '<resource>'
I0315 20:55:38.586] has:must provide one or more resources
... skipping 77 lines ...
I0315 20:55:39.042]   kubectl [flags] [options]
I0315 20:55:39.043] 
I0315 20:55:39.043] Use "kubectl <command> --help" for more information about a given command.
I0315 20:55:39.043] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0315 20:55:39.043] has:plugin\s\+Runs a command-line plugin
I0315 20:55:39.106] Successful
I0315 20:55:39.107] message:error: no plugins installed.
I0315 20:55:39.107] has:no plugins installed
I0315 20:55:39.180] Successful
I0315 20:55:39.180] message:Runs a command-line plugin. 
I0315 20:55:39.180] 
I0315 20:55:39.180] Plugins are subcommands that are not part of the major command-line distribution and can even be provided by third-parties. Please refer to the documentation and examples for more information about how to install and write your own plugins.
I0315 20:55:39.181] 
I0315 20:55:39.181] Available Commands:
I0315 20:55:39.181]   echo        Echoes for test-cmd
I0315 20:55:39.181]   env         The plugin envs plugin
I0315 20:55:39.181]   error       The tremendous plugin that always fails!
I0315 20:55:39.181]   get         The wonderful new plugin-based get!
I0315 20:55:39.181]   tree        Plugin with a tree of commands
I0315 20:55:39.181] 
I0315 20:55:39.181] Usage:
I0315 20:55:39.181]   kubectl plugin NAME [options]
I0315 20:55:39.181] 
... skipping 5 lines ...
I0315 20:55:39.182] 
I0315 20:55:39.183] Plugins are subcommands that are not part of the major command-line distribution and can even be provided by third-parties. Please refer to the documentation and examples for more information about how to install and write your own plugins.
I0315 20:55:39.183] 
I0315 20:55:39.183] Available Commands:
I0315 20:55:39.183]   echo        Echoes for test-cmd
I0315 20:55:39.183]   env         The plugin envs plugin
I0315 20:55:39.183]   error       The tremendous plugin that always fails!
I0315 20:55:39.183]   get         The wonderful new plugin-based get!
I0315 20:55:39.183]   tree        Plugin with a tree of commands
I0315 20:55:39.183] 
I0315 20:55:39.183] Usage:
I0315 20:55:39.183]   kubectl plugin NAME [options]
I0315 20:55:39.184] 
... skipping 5 lines ...
I0315 20:55:39.185] 
I0315 20:55:39.185] Plugins are subcommands that are not part of the major command-line distribution and can even be provided by third-parties. Please refer to the documentation and examples for more information about how to install and write your own plugins.
I0315 20:55:39.185] 
I0315 20:55:39.185] Available Commands:
I0315 20:55:39.185]   echo        Echoes for test-cmd
I0315 20:55:39.185]   env         The plugin envs plugin
I0315 20:55:39.186]   error       The tremendous plugin that always fails!
I0315 20:55:39.186]   get         The wonderful new plugin-based get!
I0315 20:55:39.186]   tree        Plugin with a tree of commands
I0315 20:55:39.186] 
I0315 20:55:39.186] Usage:
I0315 20:55:39.186]   kubectl plugin NAME [options]
I0315 20:55:39.186] 
I0315 20:55:39.186] Use "kubectl <command> --help" for more information about a given command.
I0315 20:55:39.187] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0315 20:55:39.187] has:error\s\+The tremendous plugin that always fails!
I0315 20:55:39.187] Successful
I0315 20:55:39.187] message:Runs a command-line plugin. 
I0315 20:55:39.187] 
I0315 20:55:39.187] Plugins are subcommands that are not part of the major command-line distribution and can even be provided by third-parties. Please refer to the documentation and examples for more information about how to install and write your own plugins.
I0315 20:55:39.187] 
I0315 20:55:39.187] Available Commands:
I0315 20:55:39.187]   echo        Echoes for test-cmd
I0315 20:55:39.188]   env         The plugin envs plugin
I0315 20:55:39.188]   error       The tremendous plugin that always fails!
I0315 20:55:39.188]   get         The wonderful new plugin-based get!
I0315 20:55:39.188]   tree        Plugin with a tree of commands
I0315 20:55:39.188] 
I0315 20:55:39.188] Usage:
I0315 20:55:39.188]   kubectl plugin NAME [options]
I0315 20:55:39.188] 
... skipping 5 lines ...
I0315 20:55:39.189] 
I0315 20:55:39.189] Plugins are subcommands that are not part of the major command-line distribution and can even be provided by third-parties. Please refer to the documentation and examples for more information about how to install and write your own plugins.
I0315 20:55:39.189] 
I0315 20:55:39.190] Available Commands:
I0315 20:55:39.190]   echo        Echoes for test-cmd
I0315 20:55:39.190]   env         The plugin envs plugin
I0315 20:55:39.190]   error       The tremendous plugin that always fails!
I0315 20:55:39.190]   get         The wonderful new plugin-based get!
I0315 20:55:39.190]   tree        Plugin with a tree of commands
I0315 20:55:39.190] 
I0315 20:55:39.190] Usage:
I0315 20:55:39.190]   kubectl plugin NAME [options]
I0315 20:55:39.191] 
... skipping 5 lines ...
I0315 20:55:39.191] 
I0315 20:55:39.192] Plugins are subcommands that are not part of the major command-line distribution and can even be provided by third-parties. Please refer to the documentation and examples for more information about how to install and write your own plugins.
I0315 20:55:39.192] 
I0315 20:55:39.192] Available Commands:
I0315 20:55:39.192]   echo        Echoes for test-cmd
I0315 20:55:39.192]   env         The plugin envs plugin
I0315 20:55:39.192]   error       The tremendous plugin that always fails!
I0315 20:55:39.192]   get         The wonderful new plugin-based get!
I0315 20:55:39.192]   tree        Plugin with a tree of commands
I0315 20:55:39.192] 
I0315 20:55:39.192] Usage:
I0315 20:55:39.193]   kubectl plugin NAME [options]
I0315 20:55:39.193] 
... skipping 5 lines ...
I0315 20:55:39.268] 
I0315 20:55:39.268] Plugins are subcommands that are not part of the major command-line distribution and can even be provided by third-parties. Please refer to the documentation and examples for more information about how to install and write your own plugins.
I0315 20:55:39.268] 
I0315 20:55:39.268] Available Commands:
I0315 20:55:39.268]   echo        Echoes for test-cmd
I0315 20:55:39.268]   env         The plugin envs plugin
I0315 20:55:39.268]   error       The tremendous plugin that always fails!
I0315 20:55:39.268]   get         The wonderful new plugin-based get!
I0315 20:55:39.268]   hello       The hello plugin
I0315 20:55:39.269]   tree        Plugin with a tree of commands
I0315 20:55:39.269] 
I0315 20:55:39.269] Usage:
I0315 20:55:39.269]   kubectl plugin NAME [options]
... skipping 6 lines ...
I0315 20:55:39.270] 
I0315 20:55:39.270] Plugins are subcommands that are not part of the major command-line distribution and can even be provided by third-parties. Please refer to the documentation and examples for more information about how to install and write your own plugins.
I0315 20:55:39.270] 
I0315 20:55:39.270] Available Commands:
I0315 20:55:39.270]   echo        Echoes for test-cmd
I0315 20:55:39.270]   env         The plugin envs plugin
I0315 20:55:39.270]   error       The tremendous plugin that always fails!
I0315 20:55:39.270]   get         The wonderful new plugin-based get!
I0315 20:55:39.270]   hello       The hello plugin
I0315 20:55:39.271]   tree        Plugin with a tree of commands
I0315 20:55:39.271] 
I0315 20:55:39.271] Usage:
I0315 20:55:39.271]   kubectl plugin NAME [options]
... skipping 6 lines ...
I0315 20:55:39.272] 
I0315 20:55:39.272] Plugins are subcommands that are not part of the major command-line distribution and can even be provided by third-parties. Please refer to the documentation and examples for more information about how to install and write your own plugins.
I0315 20:55:39.272] 
I0315 20:55:39.272] Available Commands:
I0315 20:55:39.272]   echo        Echoes for test-cmd
I0315 20:55:39.273]   env         The plugin envs plugin
I0315 20:55:39.273]   error       The tremendous plugin that always fails!
I0315 20:55:39.273]   get         The wonderful new plugin-based get!
I0315 20:55:39.273]   hello       The hello plugin
I0315 20:55:39.273]   tree        Plugin with a tree of commands
I0315 20:55:39.273] 
I0315 20:55:39.273] Usage:
I0315 20:55:39.273]   kubectl plugin NAME [options]
I0315 20:55:39.273] 
I0315 20:55:39.273] Use "kubectl <command> --help" for more information about a given command.
I0315 20:55:39.274] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0315 20:55:39.274] has:error\s\+The tremendous plugin that always fails!
I0315 20:55:39.274] Successful
I0315 20:55:39.274] message:Runs a command-line plugin. 
I0315 20:55:39.274] 
I0315 20:55:39.274] Plugins are subcommands that are not part of the major command-line distribution and can even be provided by third-parties. Please refer to the documentation and examples for more information about how to install and write your own plugins.
I0315 20:55:39.274] 
I0315 20:55:39.274] Available Commands:
I0315 20:55:39.274]   echo        Echoes for test-cmd
I0315 20:55:39.274]   env         The plugin envs plugin
I0315 20:55:39.275]   error       The tremendous plugin that always fails!
I0315 20:55:39.275]   get         The wonderful new plugin-based get!
I0315 20:55:39.275]   hello       The hello plugin
I0315 20:55:39.275]   tree        Plugin with a tree of commands
I0315 20:55:39.275] 
I0315 20:55:39.275] Usage:
I0315 20:55:39.275]   kubectl plugin NAME [options]
... skipping 6 lines ...
I0315 20:55:39.276] 
I0315 20:55:39.276] Plugins are subcommands that are not part of the major command-line distribution and can even be provided by third-parties. Please refer to the documentation and examples for more information about how to install and write your own plugins.
I0315 20:55:39.276] 
I0315 20:55:39.276] Available Commands:
I0315 20:55:39.276]   echo        Echoes for test-cmd
I0315 20:55:39.276]   env         The plugin envs plugin
I0315 20:55:39.277]   error       The tremendous plugin that always fails!
I0315 20:55:39.277]   get         The wonderful new plugin-based get!
I0315 20:55:39.277]   hello       The hello plugin
I0315 20:55:39.277]   tree        Plugin with a tree of commands
I0315 20:55:39.277] 
I0315 20:55:39.277] Usage:
I0315 20:55:39.277]   kubectl plugin NAME [options]
... skipping 159 lines ...
I0315 20:55:39.505] #######
I0315 20:55:39.505] has:#hello#
I0315 20:55:39.578] Successful
I0315 20:55:39.578] message:This plugin works!
I0315 20:55:39.578] has:This plugin works!
I0315 20:55:39.650] Successful
I0315 20:55:39.651] message:error: unknown command "hello"
I0315 20:55:39.651] See 'kubectl plugin -h' for help and examples.
I0315 20:55:39.651] has:unknown command
I0315 20:55:39.725] Successful
I0315 20:55:39.726] message:error: exit status 1
I0315 20:55:39.726] has:error: exit status 1
I0315 20:55:39.796] Successful
I0315 20:55:39.797] message:Plugin with a tree of commands
I0315 20:55:39.797] 
I0315 20:55:39.797] Available Commands:
I0315 20:55:39.797]   child1      The first child of a tree
I0315 20:55:39.797]   child2      The second child of a tree
... skipping 467 lines ...
I0315 20:55:40.203] 
I0315 20:55:40.205] +++ Running case: test-cmd.run_impersonation_tests 
I0315 20:55:40.207] +++ working dir: /go/src/k8s.io/kubernetes
I0315 20:55:40.210] +++ command: run_impersonation_tests
I0315 20:55:40.219] +++ [0315 20:55:40] Testing impersonation
I0315 20:55:40.293] Successful
I0315 20:55:40.293] message:error: requesting groups or user-extra for  without impersonating a user
I0315 20:55:40.293] has:without impersonating a user
I0315 20:55:40.454] certificatesigningrequest.certificates.k8s.io/foo created
I0315 20:55:40.555] test-cmd-util.sh:5101: Successful get csr/foo {{.spec.username}}: user1
I0315 20:55:40.654] test-cmd-util.sh:5102: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
I0315 20:55:40.739] certificatesigningrequest.certificates.k8s.io "foo" deleted
I0315 20:55:40.903] certificatesigningrequest.certificates.k8s.io/foo created
... skipping 14 lines ...
W0315 20:55:41.450] I0315 20:55:41.446734   68826 available_controller.go:290] Shutting down AvailableConditionController
W0315 20:55:41.450] I0315 20:55:41.446745   68826 autoregister_controller.go:160] Shutting down autoregister controller
W0315 20:55:41.451] I0315 20:55:41.446843   68826 apiservice_controller.go:102] Shutting down APIServiceRegistrationController
W0315 20:55:41.451] I0315 20:55:41.446853   68826 crdregistration_controller.go:143] Shutting down crd-autoregister controller
I0315 20:55:41.551] No resources found
I0315 20:55:41.551] pod "test-pod-1" force deleted
I0315 20:55:41.551] FAILED TESTS: run_rs_tests, 
I0315 20:55:48.504] junit report dir: /workspace/artifacts
I0315 20:55:48.524] +++ [0315 20:55:48] Clean up complete
I0315 20:55:48.526] Makefile:294: recipe for target 'test-cmd' failed
W0315 20:55:48.626] make: *** [test-cmd] Error 1
W0315 20:55:49.763] Traceback (most recent call last):
W0315 20:55:49.764]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 178, in <module>
W0315 20:55:49.764]     ARGS.exclude_typecheck, ARGS.exclude_godep)
W0315 20:55:49.764]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 140, in main
W0315 20:55:49.764]     check(*cmd)
W0315 20:55:49.764]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W0315 20:55:49.765]     subprocess.check_call(cmd)
W0315 20:55:49.765]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0315 20:55:49.775]     raise CalledProcessError(retcode, cmd)
W0315 20:55:49.776] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=n', '-e', 'KUBE_VERIFY_GIT_BRANCH=release-1.11', '-e', 'EXCLUDE_TYPECHECK=n', '-e', 'EXCLUDE_GODEP=n', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.11-v20190315-a2a5ddb38', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
E0315 20:55:49.781] Command failed
I0315 20:55:49.781] process 718 exited with code 1 after 13.5m
E0315 20:55:49.781] FAIL: pull-kubernetes-integration
I0315 20:55:49.782] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0315 20:55:50.362] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0315 20:55:50.404] process 109707 exited with code 0 after 0.0m
I0315 20:55:50.404] Call:  gcloud config get-value account
I0315 20:55:50.671] process 109719 exited with code 0 after 0.0m
I0315 20:55:50.671] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0315 20:55:50.671] Upload result and artifacts...
I0315 20:55:50.672] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/74996/pull-kubernetes-integration/48842
I0315 20:55:50.672] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/74996/pull-kubernetes-integration/48842/artifacts
W0315 20:55:51.707] CommandException: One or more URLs matched no objects.
E0315 20:55:51.834] Command failed
I0315 20:55:51.834] process 109731 exited with code 1 after 0.0m
W0315 20:55:51.834] Remote dir gs://kubernetes-jenkins/pr-logs/pull/74996/pull-kubernetes-integration/48842/artifacts not exist yet
I0315 20:55:51.835] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/74996/pull-kubernetes-integration/48842/artifacts
I0315 20:55:53.920] process 109874 exited with code 0 after 0.0m
W0315 20:55:53.921] metadata path /workspace/_artifacts/metadata.json does not exist
W0315 20:55:53.921] metadata not found or invalid, init with empty metadata
... skipping 23 lines ...