PR: mtaufen: upload Windows startup scripts to GCS for CI
Result: FAILURE
Tests: 1 failed / 58 succeeded
Started: 2019-02-25 22:13
Elapsed: 18m15s
Builder: gke-prow-containerd-pool-99179761-s4k7
Refs: master:2aacb773, 73650:9b3926c3
pod: 661d568e-394a-11e9-bec6-0a580a6c1016
infra-commit: f70ee9e84
repo: k8s.io/kubernetes
repo-commit: 54af3a65e2ba4ff9272a52c1f5316a11945d81e5
repos: {u'k8s.io/kubernetes': u'master:2aacb773746b9888a43cebbae2173fb6607abdc8,73650:9b3926c3d3fe70aba07a12d8ccff623e7b84c6ad'}

Test Failures


test-cmd run_kubectl_run_tests 0.94s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=test\-cmd\srun\_kubectl\_run\_tests$'
kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
!!! [0225 22:26:50] Call tree:
!!! [0225 22:26:50]  1: /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/run.sh:39 kube::test::get_object_assert(...)
!!! [0225 22:26:50]  2: /go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 run_kubectl_run_tests(...)
!!! [0225 22:26:50]  3: /go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0225 22:26:50]  4: /go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:135 juLog(...)
!!! [0225 22:26:50]  5: /go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:483 record_command(...)
!!! [0225 22:26:50]  6: hack/make-rules/test-cmd.sh:109 runTests(...)
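
For reference, the assertion that failed above is kube::test::get_object_assert (hack/lib/test.sh), called from test/cmd/run.sh:39 right after job.batch "pi" was deleted: it renders a kubectl go-template and compares the output against an expected string, and here the job's pod (pi-9lkcx) was still present when the test expected an empty pod list. A minimal sketch of that assertion pattern (simplified, with no retry loop or kube_flags handling; an illustration, not the actual hack/lib/test.sh implementation):

    #!/usr/bin/env bash
    # Sketch: assert that a kubectl go-template renders to an expected value.
    kube::test::get_object_assert() {
      local object=$1 request=$2 expected=$3
      local res
      res=$(kubectl get "${object}" -o go-template="${request}")
      if [[ "${res}" == "${expected}" ]]; then
        echo "Successful get ${object} ${request}: ${res}"
      else
        echo "FAIL!"
        echo "Get ${object} ${request}"
        echo "  Expected: ${expected}"
        echo "  Got:      ${res}"
        return 1
      fi
    }

    # The failing call asserts that no pods remain after deleting the job:
    # kube::test::get_object_assert pods '{{range.items}}{{.metadata.name}}:{{end}}' ''
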




Error lines from build-log.txt

... skipping 314 lines ...
W0225 22:25:08.804] I0225 22:25:08.802843   44054 serving.go:312] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
W0225 22:25:08.805] I0225 22:25:08.803522   44054 server.go:561] external host was not specified, using 172.17.0.2
W0225 22:25:08.805] W0225 22:25:08.803542   44054 authentication.go:415] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
W0225 22:25:08.805] I0225 22:25:08.803975   44054 server.go:147] Version: v1.15.0-alpha.0.358+54af3a65e2ba4f
W0225 22:25:09.327] I0225 22:25:09.326544   44054 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0225 22:25:09.327] I0225 22:25:09.326585   44054 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0225 22:25:09.328] E0225 22:25:09.327377   44054 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0225 22:25:09.328] E0225 22:25:09.327424   44054 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0225 22:25:09.328] E0225 22:25:09.327472   44054 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0225 22:25:09.328] E0225 22:25:09.327545   44054 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0225 22:25:09.329] E0225 22:25:09.327585   44054 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0225 22:25:09.329] E0225 22:25:09.327624   44054 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0225 22:25:09.329] I0225 22:25:09.327646   44054 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0225 22:25:09.330] I0225 22:25:09.327663   44054 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0225 22:25:09.330] I0225 22:25:09.330082   44054 clientconn.go:551] parsed scheme: ""
W0225 22:25:09.331] I0225 22:25:09.330123   44054 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0225 22:25:09.331] I0225 22:25:09.330208   44054 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0225 22:25:09.331] I0225 22:25:09.330321   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 345 lines ...
W0225 22:25:09.976] W0225 22:25:09.975647   44054 genericapiserver.go:344] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0225 22:25:10.326] I0225 22:25:10.326048   44054 clientconn.go:551] parsed scheme: ""
W0225 22:25:10.327] I0225 22:25:10.326097   44054 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0225 22:25:10.327] I0225 22:25:10.326151   44054 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0225 22:25:10.327] I0225 22:25:10.326204   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:25:10.328] I0225 22:25:10.328193   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:25:10.856] E0225 22:25:10.855324   44054 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0225 22:25:10.856] E0225 22:25:10.855402   44054 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0225 22:25:10.856] E0225 22:25:10.855438   44054 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0225 22:25:10.856] E0225 22:25:10.855509   44054 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0225 22:25:10.856] E0225 22:25:10.855545   44054 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0225 22:25:10.857] E0225 22:25:10.855569   44054 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0225 22:25:10.857] I0225 22:25:10.855592   44054 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0225 22:25:10.857] I0225 22:25:10.855614   44054 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0225 22:25:10.858] I0225 22:25:10.857060   44054 clientconn.go:551] parsed scheme: ""
W0225 22:25:10.858] I0225 22:25:10.857082   44054 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0225 22:25:10.858] I0225 22:25:10.857121   44054 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0225 22:25:10.858] I0225 22:25:10.857560   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 179 lines ...
W0225 22:25:49.252] I0225 22:25:49.248549   47432 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.apps
W0225 22:25:49.253] I0225 22:25:49.248612   47432 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
W0225 22:25:49.253] I0225 22:25:49.248712   47432 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
W0225 22:25:49.253] I0225 22:25:49.248782   47432 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
W0225 22:25:49.253] I0225 22:25:49.248831   47432 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for endpoints
W0225 22:25:49.253] I0225 22:25:49.248867   47432 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges
W0225 22:25:49.254] E0225 22:25:49.248919   47432 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0225 22:25:49.254] I0225 22:25:49.248948   47432 controllermanager.go:497] Started "resourcequota"
W0225 22:25:49.254] W0225 22:25:49.248962   47432 controllermanager.go:476] "bootstrapsigner" is disabled
W0225 22:25:49.254] I0225 22:25:49.249143   47432 resource_quota_controller.go:276] Starting resource quota controller
W0225 22:25:49.254] I0225 22:25:49.249224   47432 controller_utils.go:1021] Waiting for caches to sync for resource quota controller
W0225 22:25:49.255] I0225 22:25:49.249315   47432 resource_quota_monitor.go:301] QuotaMonitor running
W0225 22:25:49.255] I0225 22:25:49.249464   47432 node_lifecycle_controller.go:77] Sending events to api server
W0225 22:25:49.255] E0225 22:25:49.249547   47432 core.go:162] failed to start cloud node lifecycle controller: no cloud provider provided
W0225 22:25:49.255] W0225 22:25:49.249568   47432 controllermanager.go:489] Skipping "cloud-node-lifecycle"
W0225 22:25:49.255] I0225 22:25:49.250466   47432 controllermanager.go:497] Started "persistentvolume-binder"
W0225 22:25:49.256] I0225 22:25:49.250651   47432 pv_controller_base.go:271] Starting persistent volume controller
W0225 22:25:49.256] I0225 22:25:49.250694   47432 controller_utils.go:1021] Waiting for caches to sync for persistent volume controller
W0225 22:25:49.256] I0225 22:25:49.250985   47432 controllermanager.go:497] Started "clusterrole-aggregation"
W0225 22:25:49.256] I0225 22:25:49.251089   47432 clusterroleaggregation_controller.go:148] Starting ClusterRoleAggregator
... skipping 16 lines ...
W0225 22:25:49.606] I0225 22:25:49.562320   47432 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0225 22:25:49.606] I0225 22:25:49.562301   47432 controllermanager.go:497] Started "garbagecollector"
W0225 22:25:49.606] I0225 22:25:49.562344   47432 graph_builder.go:308] GraphBuilder running
W0225 22:25:49.606] I0225 22:25:49.563057   47432 controllermanager.go:497] Started "job"
W0225 22:25:49.607] I0225 22:25:49.563513   47432 job_controller.go:143] Starting job controller
W0225 22:25:49.607] I0225 22:25:49.563537   47432 controller_utils.go:1021] Waiting for caches to sync for job controller
W0225 22:25:49.607] E0225 22:25:49.563694   47432 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0225 22:25:49.607] W0225 22:25:49.563720   47432 controllermanager.go:489] Skipping "service"
W0225 22:25:49.608] W0225 22:25:49.564333   47432 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
W0225 22:25:49.608] I0225 22:25:49.564975   47432 controllermanager.go:497] Started "attachdetach"
W0225 22:25:49.608] I0225 22:25:49.565612   47432 controllermanager.go:497] Started "ttl"
W0225 22:25:49.608] W0225 22:25:49.565636   47432 controllermanager.go:476] "tokencleaner" is disabled
W0225 22:25:49.608] W0225 22:25:49.565647   47432 controllermanager.go:489] Skipping "root-ca-cert-publisher"
... skipping 55 lines ...
W0225 22:25:49.682] I0225 22:25:49.681813   47432 taint_manager.go:198] Starting NoExecuteTaintManager
W0225 22:25:49.696] I0225 22:25:49.695368   47432 controller_utils.go:1028] Caches are synced for expand controller
W0225 22:25:49.751] I0225 22:25:49.750903   47432 controller_utils.go:1028] Caches are synced for persistent volume controller
W0225 22:25:49.766] I0225 22:25:49.766136   47432 controller_utils.go:1028] Caches are synced for attach detach controller
W0225 22:25:49.767] I0225 22:25:49.766793   47432 controller_utils.go:1028] Caches are synced for ReplicaSet controller
W0225 22:25:49.767] I0225 22:25:49.767374   47432 controller_utils.go:1028] Caches are synced for PV protection controller
W0225 22:25:49.773] W0225 22:25:49.772756   47432 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
I0225 22:25:49.873] node/127.0.0.1 created
I0225 22:25:49.874] +++ [0225 22:25:49] Checking kubectl version
I0225 22:25:49.874] Client Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.0-alpha.0.358+54af3a65e2ba4f", GitCommit:"54af3a65e2ba4ff9272a52c1f5316a11945d81e5", GitTreeState:"clean", BuildDate:"2019-02-25T22:23:25Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
I0225 22:25:49.875] Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.0-alpha.0.358+54af3a65e2ba4f", GitCommit:"54af3a65e2ba4ff9272a52c1f5316a11945d81e5", GitTreeState:"clean", BuildDate:"2019-02-25T22:24:04Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
W0225 22:25:49.975] I0225 22:25:49.952532   47432 controller_utils.go:1028] Caches are synced for service account controller
W0225 22:25:49.975] I0225 22:25:49.955890   44054 controller.go:606] quota admission added evaluator for: serviceaccounts
... skipping 25 lines ...
I0225 22:25:50.467]   "compiler": "gc",
I0225 22:25:50.467]   "platform": "linux/amd64"
I0225 22:25:50.594] }+++ [0225 22:25:50] Testing kubectl version: check client only output matches expected output
I0225 22:25:50.746] Successful: the flag '--client' shows correct client info
I0225 22:25:50.753] Successful: the flag '--client' correctly has no server version info
I0225 22:25:50.757] +++ [0225 22:25:50] Testing kubectl version: verify json output
W0225 22:25:50.898] E0225 22:25:50.898266   47432 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
I0225 22:25:50.999] Successful: --output json has correct client info
I0225 22:25:50.999] Successful: --output json has correct server info
I0225 22:25:51.000] +++ [0225 22:25:50] Testing kubectl version: verify json output using additional --client flag does not contain serverVersion
I0225 22:25:51.077] Successful: --client --output json has correct client info
I0225 22:25:51.084] Successful: --client --output json has no server info
I0225 22:25:51.087] +++ [0225 22:25:51] Testing kubectl version: compare json output using additional --short flag
... skipping 50 lines ...
I0225 22:25:54.197] +++ working dir: /go/src/k8s.io/kubernetes
I0225 22:25:54.200] +++ command: run_RESTMapper_evaluation_tests
I0225 22:25:54.211] +++ [0225 22:25:54] Creating namespace namespace-1551133554-4918
I0225 22:25:54.292] namespace/namespace-1551133554-4918 created
I0225 22:25:54.371] Context "test" modified.
I0225 22:25:54.378] +++ [0225 22:25:54] Testing RESTMapper
I0225 22:25:54.497] +++ [0225 22:25:54] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0225 22:25:54.513] +++ exit code: 0
I0225 22:25:54.653] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0225 22:25:54.654] bindings                                                                      true         Binding
I0225 22:25:54.654] componentstatuses                 cs                                          false        ComponentStatus
I0225 22:25:54.654] configmaps                        cm                                          true         ConfigMap
I0225 22:25:54.654] endpoints                         ep                                          true         Endpoints
... skipping 638 lines ...
I0225 22:26:14.605] core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0225 22:26:14.780] core.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0225 22:26:14.883] core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0225 22:26:15.056] core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0225 22:26:15.150] core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0225 22:26:15.243] pod "valid-pod" force deleted
W0225 22:26:15.344] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0225 22:26:15.344] error: setting 'all' parameter but found a non empty selector. 
W0225 22:26:15.344] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0225 22:26:15.445] core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{$id_field}}:{{end}}: 
I0225 22:26:15.448] core.sh:211: Successful get namespaces {{range.items}}{{ if eq $id_field \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
I0225 22:26:15.537] namespace/test-kubectl-describe-pod created
I0225 22:26:15.647] core.sh:215: Successful get namespaces/test-kubectl-describe-pod {{.metadata.name}}: test-kubectl-describe-pod
I0225 22:26:15.743] core.sh:219: Successful get secrets --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 11 lines ...
I0225 22:26:16.768] poddisruptionbudget.policy/test-pdb-3 created
I0225 22:26:16.871] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0225 22:26:16.951] poddisruptionbudget.policy/test-pdb-4 created
I0225 22:26:17.050] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0225 22:26:17.237] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:26:17.464] pod/env-test-pod created
W0225 22:26:17.565] error: min-available and max-unavailable cannot be both specified
I0225 22:26:17.670] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0225 22:26:17.670] Name:               env-test-pod
I0225 22:26:17.671] Namespace:          test-kubectl-describe-pod
I0225 22:26:17.671] Priority:           0
I0225 22:26:17.671] PriorityClassName:  <none>
I0225 22:26:17.671] Node:               <none>
... skipping 145 lines ...
I0225 22:26:30.780] service "modified" deleted
I0225 22:26:30.876] replicationcontroller "modified" deleted
I0225 22:26:31.196] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:26:31.382] pod/valid-pod created
I0225 22:26:31.505] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0225 22:26:31.686] Successful
I0225 22:26:31.686] message:Error from server: cannot restore map from string
I0225 22:26:31.687] has:cannot restore map from string
W0225 22:26:31.788] E0225 22:26:31.674671   44054 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
I0225 22:26:31.888] Successful
I0225 22:26:31.889] message:pod/valid-pod patched (no change)
I0225 22:26:31.889] has:patched (no change)
I0225 22:26:31.891] pod/valid-pod patched
I0225 22:26:32.010] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0225 22:26:32.115] core.sh:457: Successful get pods {{range.items}}{{.metadata.annotations}}:{{end}}: map[kubernetes.io/change-cause:kubectl patch pod valid-pod --server=http://127.0.0.1:8080 --match-server-version=true --record=true --patch={"spec":{"containers":[{"name": "kubernetes-serve-hostname", "image": "nginx"}]}}]:
... skipping 4 lines ...
I0225 22:26:32.629] pod/valid-pod patched
I0225 22:26:32.742] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0225 22:26:32.832] pod/valid-pod patched
I0225 22:26:32.942] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0225 22:26:33.140] pod/valid-pod patched
I0225 22:26:33.266] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0225 22:26:33.468] +++ [0225 22:26:33] "kubectl patch with resourceVersion 500" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0225 22:26:33.758] pod "valid-pod" deleted
I0225 22:26:33.776] pod/valid-pod replaced
I0225 22:26:33.898] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0225 22:26:34.088] Successful
I0225 22:26:34.089] message:error: --grace-period must have --force specified
I0225 22:26:34.089] has:\-\-grace-period must have \-\-force specified
I0225 22:26:34.283] Successful
I0225 22:26:34.283] message:error: --timeout must have --force specified
I0225 22:26:34.284] has:\-\-timeout must have \-\-force specified
I0225 22:26:34.465] node/node-v1-test created
W0225 22:26:34.566] W0225 22:26:34.465319   47432 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0225 22:26:34.667] node/node-v1-test replaced
I0225 22:26:34.771] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0225 22:26:34.865] node "node-v1-test" deleted
W0225 22:26:34.965] I0225 22:26:34.686154   47432 event.go:209] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-v1-test", UID:"6879c1f4-394c-11e9-bf9a-0242ac110002", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-v1-test event: Registered Node node-v1-test in Controller
I0225 22:26:35.067] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0225 22:26:35.325] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
... skipping 17 lines ...
I0225 22:26:37.058] core.sh:593: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0225 22:26:37.155] pod/valid-pod labeled
W0225 22:26:37.256] Edit cancelled, no changes made.
W0225 22:26:37.256] Edit cancelled, no changes made.
W0225 22:26:37.257] Edit cancelled, no changes made.
W0225 22:26:37.257] Edit cancelled, no changes made.
W0225 22:26:37.257] error: 'name' already has a value (valid-pod), and --overwrite is false
I0225 22:26:37.357] core.sh:597: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
I0225 22:26:37.376] core.sh:601: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0225 22:26:37.476] pod "valid-pod" force deleted
W0225 22:26:37.577] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0225 22:26:37.678] core.sh:605: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:26:37.678] +++ [0225 22:26:37] Creating namespace namespace-1551133597-10908
... skipping 83 lines ...
I0225 22:26:45.716] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0225 22:26:45.719] +++ working dir: /go/src/k8s.io/kubernetes
I0225 22:26:45.723] +++ command: run_kubectl_create_error_tests
I0225 22:26:45.738] +++ [0225 22:26:45] Creating namespace namespace-1551133605-26960
I0225 22:26:45.822] namespace/namespace-1551133605-26960 created
I0225 22:26:45.902] Context "test" modified.
I0225 22:26:45.910] +++ [0225 22:26:45] Testing kubectl create with error
W0225 22:26:46.011] Error: required flag(s) "filename" not set
W0225 22:26:46.011] 
W0225 22:26:46.011] 
W0225 22:26:46.012] Examples:
W0225 22:26:46.012]   # Create a pod using the data in pod.json.
W0225 22:26:46.012]   kubectl create -f ./pod.json
W0225 22:26:46.012]   
... skipping 38 lines ...
W0225 22:26:46.019]   kubectl create -f FILENAME [options]
W0225 22:26:46.019] 
W0225 22:26:46.019] Use "kubectl <command> --help" for more information about a given command.
W0225 22:26:46.019] Use "kubectl options" for a list of global command-line options (applies to all commands).
W0225 22:26:46.019] 
W0225 22:26:46.019] required flag(s) "filename" not set
I0225 22:26:46.163] +++ [0225 22:26:46] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0225 22:26:46.263] kubectl convert is DEPRECATED and will be removed in a future version.
W0225 22:26:46.264] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0225 22:26:46.364] +++ exit code: 0
I0225 22:26:46.411] Recording: run_kubectl_apply_tests
I0225 22:26:46.411] Running command: run_kubectl_apply_tests
I0225 22:26:46.434] 
... skipping 20 lines ...
W0225 22:26:48.827] I0225 22:26:48.825861   44054 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0225 22:26:48.827] I0225 22:26:48.825898   44054 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0225 22:26:48.827] I0225 22:26:48.825962   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:26:48.828] I0225 22:26:48.826523   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:26:48.830] I0225 22:26:48.829458   44054 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
I0225 22:26:48.930] kind.mygroup.example.com/myobj serverside-applied (server dry run)
W0225 22:26:49.031] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0225 22:26:49.132] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0225 22:26:49.132] +++ exit code: 0
I0225 22:26:49.153] Recording: run_kubectl_run_tests
I0225 22:26:49.153] Running command: run_kubectl_run_tests
I0225 22:26:49.179] 
I0225 22:26:49.182] +++ Running case: test-cmd.run_kubectl_run_tests 
... skipping 42 lines ...
I0225 22:26:49.871] Node-Selectors:   <none>
I0225 22:26:49.871] Tolerations:      <none>
I0225 22:26:49.871] Events:           <none>
I0225 22:26:49.962] job.batch "pi" deleted
I0225 22:26:50.073] Waiting for Get pods {{range.items}}{{.metadata.name}}:{{end}} : expected: , got: pi-9lkcx:
I0225 22:26:50.076] 
I0225 22:26:50.083] run.sh:39: FAIL!
I0225 22:26:50.083] Get pods {{range.items}}{{.metadata.name}}:{{end}}
I0225 22:26:50.083]   Expected: 
I0225 22:26:50.083]   Got:      pi-9lkcx:
I0225 22:26:50.084]
I0225 22:26:50.084] 51 /go/src/k8s.io/kubernetes/hack/lib/test.sh
I0225 22:26:50.084]
I0225 22:26:50.141] +++ exit code: 1
I0225 22:26:50.149] +++ error: 1
I0225 22:26:50.193] Error when running run_kubectl_run_tests
I0225 22:26:50.193] Recording: run_kubectl_create_filter_tests
I0225 22:26:50.193] Running command: run_kubectl_create_filter_tests
I0225 22:26:50.221] 
I0225 22:26:50.224] +++ Running case: test-cmd.run_kubectl_create_filter_tests 
I0225 22:26:50.227] +++ working dir: /go/src/k8s.io/kubernetes
I0225 22:26:50.230] +++ command: run_kubectl_create_filter_tests
... skipping 9 lines ...
W0225 22:26:50.517] !!! [0225 22:26:50]  5: /go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:483 record_command(...)
W0225 22:26:50.517] !!! [0225 22:26:50]  6: hack/make-rules/test-cmd.sh:109 runTests(...)
I0225 22:26:50.618] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:26:50.704] pod/selector-test-pod created
I0225 22:26:50.817] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0225 22:26:50.916] Successful
I0225 22:26:50.917] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0225 22:26:50.917] has:pods "selector-test-pod-dont-apply" not found
I0225 22:26:51.010] pod "selector-test-pod" deleted
I0225 22:26:51.038] +++ exit code: 0
I0225 22:26:51.087] Recording: run_kubectl_apply_deployments_tests
I0225 22:26:51.088] Running command: run_kubectl_apply_deployments_tests
I0225 22:26:51.117] 
... skipping 39 lines ...
W0225 22:26:54.140] I0225 22:26:54.050928   47432 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1551133611-7778", Name:"nginx", UID:"7423e137-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"568", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-776cc67f78 to 3
W0225 22:26:54.141] I0225 22:26:54.055135   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551133611-7778", Name:"nginx-776cc67f78", UID:"742554c2-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"569", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-776cc67f78-mvxrk
W0225 22:26:54.142] I0225 22:26:54.060236   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551133611-7778", Name:"nginx-776cc67f78", UID:"742554c2-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"569", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-776cc67f78-mxrkt
W0225 22:26:54.142] I0225 22:26:54.062178   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551133611-7778", Name:"nginx-776cc67f78", UID:"742554c2-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"569", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-776cc67f78-whtwr
I0225 22:26:54.242] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I0225 22:26:58.483] Successful
I0225 22:26:58.483] message:Error from server (Conflict): error when applying patch:
I0225 22:26:58.484] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1551133611-7778\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0225 22:26:58.484] to:
I0225 22:26:58.484] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I0225 22:26:58.484] Name: "nginx", Namespace: "namespace-1551133611-7778"
I0225 22:26:58.486] Object: &{map["kind":"Deployment" "apiVersion":"extensions/v1beta1" "metadata":map["name":"nginx" "uid":"7423e137-394c-11e9-bf9a-0242ac110002" "resourceVersion":"581" "annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1551133611-7778\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "managedFields":[map["apiVersion":"apps/v1" "time":"2019-02-25T22:26:54Z" "fields":map["f:metadata":map["f:annotations":map["f:deployment.kubernetes.io/revision":map[]]] "f:status":map["f:replicas":map[] "f:unavailableReplicas":map[] "f:updatedReplicas":map[] "f:conditions":map[".":map[] "k:{\"type\":\"Available\"}":map["f:reason":map[] "f:status":map[] "f:type":map[] ".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[]]] "f:observedGeneration":map[]]] "manager":"kube-controller-manager" "operation":"Update"] map["manager":"kubectl" "operation":"Update" "apiVersion":"extensions/v1beta1" "time":"2019-02-25T22:26:54Z" "fields":map["f:metadata":map["f:annotations":map["f:kubectl.kubernetes.io/last-applied-configuration":map[] ".":map[]] "f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:progressDeadlineSeconds":map[] "f:replicas":map[] "f:revisionHistoryLimit":map[] "f:selector":map[".":map[] "f:matchLabels":map[".":map[] "f:name":map[]]] "f:strategy":map["f:rollingUpdate":map["f:maxUnavailable":map[] ".":map[] "f:maxSurge":map[]] "f:type":map[]] "f:template":map["f:metadata":map["f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:schedulerName":map[] "f:securityContext":map[] "f:terminationGracePeriodSeconds":map[] "f:containers":map["k:{\"name\":\"nginx\"}":map["f:terminationMessagePath":map[] "f:terminationMessagePolicy":map[] ".":map[] "f:image":map[] "f:imagePullPolicy":map[] "f:name":map[] "f:ports":map[".":map[] "k:{\"containerPort\":80,\"protocol\":\"TCP\"}":map["f:protocol":map[] ".":map[] "f:containerPort":map[]]] "f:resources":map[]]] "f:dnsPolicy":map[] "f:restartPolicy":map[]]]]]]] "namespace":"namespace-1551133611-7778" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1551133611-7778/deployments/nginx" "generation":'\x01' "creationTimestamp":"2019-02-25T22:26:54Z" "labels":map["name":"nginx"]] "spec":map["progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\x03' "selector":map["matchLabels":map["name":"nginx1"]] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["name":"nginx" "image":"k8s.gcr.io/nginx:test-cmd" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File" "imagePullPolicy":"IfNotPresent"]] "restartPolicy":"Always" "terminationGracePeriodSeconds":'\x1e' "dnsPolicy":"ClusterFirst" "securityContext":map[] "schedulerName":"default-scheduler"]] "strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxUnavailable":'\x01' "maxSurge":'\x01']] "revisionHistoryLimit":%!q(int64=+2147483647)] "status":map["conditions":[map["reason":"MinimumReplicasUnavailable" "message":"Deployment does not have minimum availability." 
"type":"Available" "status":"False" "lastUpdateTime":"2019-02-25T22:26:54Z" "lastTransitionTime":"2019-02-25T22:26:54Z"]] "observedGeneration":'\x01' "replicas":'\x03' "updatedReplicas":'\x03' "unavailableReplicas":'\x03']]}
I0225 22:26:58.487] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I0225 22:26:58.487] has:Error from server (Conflict)
W0225 22:26:59.999] I0225 22:26:59.998534   47432 horizontal.go:320] Horizontal Pod Autoscaler frontend has been deleted in namespace-1551133602-8268
W0225 22:27:02.851] I0225 22:27:02.850674   44054 controller.go:606] quota admission added evaluator for: deployments.apps
I0225 22:27:03.808] deployment.extensions/nginx configured
W0225 22:27:03.909] I0225 22:27:03.814142   47432 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1551133611-7778", Name:"nginx", UID:"79f6e379-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"604", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7bd4fbc645 to 3
W0225 22:27:03.909] I0225 22:27:03.819000   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551133611-7778", Name:"nginx-7bd4fbc645", UID:"79f7e9af-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"605", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7bd4fbc645-v4mjd
W0225 22:27:03.910] I0225 22:27:03.823620   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551133611-7778", Name:"nginx-7bd4fbc645", UID:"79f7e9af-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"605", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7bd4fbc645-xqfn6
... skipping 170 lines ...
I0225 22:27:11.546] +++ [0225 22:27:11] Creating namespace namespace-1551133631-22627
I0225 22:27:11.639] namespace/namespace-1551133631-22627 created
I0225 22:27:11.728] Context "test" modified.
I0225 22:27:11.737] +++ [0225 22:27:11] Testing kubectl get
I0225 22:27:11.838] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:27:11.944] Successful
I0225 22:27:11.944] message:Error from server (NotFound): pods "abc" not found
I0225 22:27:11.944] has:pods "abc" not found
I0225 22:27:12.079] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:27:12.211] Successful
I0225 22:27:12.212] message:Error from server (NotFound): pods "abc" not found
I0225 22:27:12.212] has:pods "abc" not found
I0225 22:27:12.316] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:27:12.452] Successful
I0225 22:27:12.453] message:{
I0225 22:27:12.453]     "apiVersion": "v1",
I0225 22:27:12.453]     "items": [],
... skipping 23 lines ...
I0225 22:27:13.016] has not:No resources found
I0225 22:27:13.172] Successful
I0225 22:27:13.173] message:NAME
I0225 22:27:13.173] has not:No resources found
I0225 22:27:13.320] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:27:13.490] Successful
I0225 22:27:13.491] message:error: the server doesn't have a resource type "foobar"
I0225 22:27:13.491] has not:No resources found
I0225 22:27:13.651] Successful
I0225 22:27:13.651] message:No resources found.
I0225 22:27:13.652] has:No resources found
I0225 22:27:13.800] Successful
I0225 22:27:13.801] message:
I0225 22:27:13.802] has not:No resources found
I0225 22:27:13.939] Successful
I0225 22:27:13.939] message:No resources found.
I0225 22:27:13.939] has:No resources found
I0225 22:27:14.090] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:27:14.222] Successful
I0225 22:27:14.223] message:Error from server (NotFound): pods "abc" not found
I0225 22:27:14.223] has:pods "abc" not found
I0225 22:27:14.225] FAIL!
I0225 22:27:14.225] message:Error from server (NotFound): pods "abc" not found
I0225 22:27:14.225] has not:List
I0225 22:27:14.226] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0225 22:27:14.428] Successful
I0225 22:27:14.429] message:I0225 22:27:14.335517   58323 loader.go:359] Config loaded from file /tmp/tmp.zFGdFsERfb/.kube/config
I0225 22:27:14.429] I0225 22:27:14.337097   58323 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0225 22:27:14.429] I0225 22:27:14.394279   58323 round_trippers.go:438] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 701 lines ...
I0225 22:27:19.004] }
I0225 22:27:19.138] get.sh:155: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0225 22:27:19.577] <no value>Successful
I0225 22:27:19.578] message:valid-pod:
I0225 22:27:19.578] has:valid-pod:
I0225 22:27:19.713] Successful
I0225 22:27:19.713] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0225 22:27:19.714] 	template was:
I0225 22:27:19.714] 		{.missing}
I0225 22:27:19.714] 	object given to jsonpath engine was:
I0225 22:27:19.716] 		map[string]interface {}{"kind":"Pod", "apiVersion":"v1", "metadata":map[string]interface {}{"managedFields":[]interface {}{map[string]interface {}{"fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:terminationGracePeriodSeconds":map[string]interface {}{}, "f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{"f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}, ".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "apiVersion":"v1", "time":"2019-02-25T22:27:18Z"}}, "name":"valid-pod", "namespace":"namespace-1551133638-20161", "selfLink":"/api/v1/namespaces/namespace-1551133638-20161/pods/valid-pod", "uid":"82ecf5f8-394c-11e9-bf9a-0242ac110002", "resourceVersion":"678", "creationTimestamp":"2019-02-25T22:27:18Z", "labels":map[string]interface {}{"name":"valid-pod"}}, "spec":map[string]interface {}{"enableServiceLinks":true, "containers":[]interface {}{map[string]interface {}{"name":"kubernetes-serve-hostname", "image":"k8s.gcr.io/serve_hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File", "imagePullPolicy":"Always"}}, "restartPolicy":"Always", "terminationGracePeriodSeconds":30, "dnsPolicy":"ClusterFirst", "securityContext":map[string]interface {}{}, "schedulerName":"default-scheduler", "priority":0}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0225 22:27:19.717] has:missing is not found
W0225 22:27:19.839] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
I0225 22:27:19.940] Successful
I0225 22:27:19.940] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0225 22:27:19.941] 	template was:
I0225 22:27:19.941] 		{{.missing}}
I0225 22:27:19.941] 	raw data was:
I0225 22:27:19.943] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-02-25T22:27:18Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-02-25T22:27:18Z"}],"name":"valid-pod","namespace":"namespace-1551133638-20161","resourceVersion":"678","selfLink":"/api/v1/namespaces/namespace-1551133638-20161/pods/valid-pod","uid":"82ecf5f8-394c-11e9-bf9a-0242ac110002"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0225 22:27:19.943] 	object given to template engine was:
I0225 22:27:19.945] 		map[apiVersion:v1 kind:Pod metadata:map[selfLink:/api/v1/namespaces/namespace-1551133638-20161/pods/valid-pod uid:82ecf5f8-394c-11e9-bf9a-0242ac110002 creationTimestamp:2019-02-25T22:27:18Z labels:map[name:valid-pod] managedFields:[map[fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:terminationGracePeriodSeconds:map[] f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[f:imagePullPolicy:map[] f:name:map[] f:resources:map[f:requests:map[.:map[] f:cpu:map[] f:memory:map[]] .:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[] .:map[] f:image:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[]]] manager:kubectl operation:Update time:2019-02-25T22:27:18Z apiVersion:v1]] name:valid-pod namespace:namespace-1551133638-20161 resourceVersion:678] spec:map[terminationGracePeriodSeconds:30 containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[requests:map[cpu:1 memory:512Mi] limits:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[]] status:map[qosClass:Guaranteed phase:Pending]]
... skipping 159 lines ...
I0225 22:27:23.343]   terminationGracePeriodSeconds: 30
I0225 22:27:23.343] status:
I0225 22:27:23.343]   phase: Pending
I0225 22:27:23.343]   qosClass: Guaranteed
I0225 22:27:23.343] has:name: valid-pod
I0225 22:27:23.347] Successful
I0225 22:27:23.348] message:Error from server (NotFound): pods "invalid-pod" not found
I0225 22:27:23.348] has:"invalid-pod" not found
I0225 22:27:23.454] pod "valid-pod" deleted
I0225 22:27:23.587] get.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:27:23.813] pod/redis-master created
I0225 22:27:23.818] pod/valid-pod created
I0225 22:27:23.953] Successful
... skipping 254 lines ...
I0225 22:27:30.256] Running command: run_create_secret_tests
I0225 22:27:30.280] 
I0225 22:27:30.283] +++ Running case: test-cmd.run_create_secret_tests 
I0225 22:27:30.286] +++ working dir: /go/src/k8s.io/kubernetes
I0225 22:27:30.288] +++ command: run_create_secret_tests
I0225 22:27:30.389] Successful
I0225 22:27:30.390] message:Error from server (NotFound): secrets "mysecret" not found
I0225 22:27:30.390] has:secrets "mysecret" not found
I0225 22:27:30.561] Successful
I0225 22:27:30.561] message:Error from server (NotFound): secrets "mysecret" not found
I0225 22:27:30.561] has:secrets "mysecret" not found
I0225 22:27:30.563] Successful
I0225 22:27:30.564] message:user-specified
I0225 22:27:30.564] has:user-specified
I0225 22:27:30.641] Successful
I0225 22:27:30.722] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"8a01844c-394c-11e9-bf9a-0242ac110002","resourceVersion":"754","creationTimestamp":"2019-02-25T22:27:30Z"}}
... skipping 147 lines ...
I0225 22:27:36.386] has:Timeout exceeded while reading body
I0225 22:27:36.484] Successful
I0225 22:27:36.484] message:NAME        READY   STATUS    RESTARTS   AGE
I0225 22:27:36.484] valid-pod   0/1     Pending   0          2s
I0225 22:27:36.484] has:valid-pod
I0225 22:27:36.566] Successful
I0225 22:27:36.567] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0225 22:27:36.567] has:Invalid timeout value
I0225 22:27:36.667] pod "valid-pod" deleted
I0225 22:27:36.692] +++ exit code: 0
I0225 22:27:37.359] Recording: run_crd_tests
I0225 22:27:37.359] Running command: run_crd_tests
I0225 22:27:37.390] 
... skipping 237 lines ...
I0225 22:27:42.599] foo.company.com/test patched
I0225 22:27:42.708] crd.sh:237: Successful get foos/test {{.patched}}: value1
I0225 22:27:42.805] foo.company.com/test patched
I0225 22:27:42.915] crd.sh:239: Successful get foos/test {{.patched}}: value2
I0225 22:27:43.016] foo.company.com/test patched
I0225 22:27:43.130] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I0225 22:27:43.319] +++ [0225 22:27:43] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0225 22:27:43.409] {
I0225 22:27:43.409]     "apiVersion": "company.com/v1",
I0225 22:27:43.409]     "kind": "Foo",
I0225 22:27:43.410]     "metadata": {
I0225 22:27:43.410]         "annotations": {
I0225 22:27:43.410]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 257 lines ...
I0225 22:27:45.119] bar.company.com/test patched
I0225 22:27:45.123] Successful
I0225 22:27:45.123] message:bar.company.com/test
I0225 22:27:45.123] has:bar.company.com/test
I0225 22:27:45.213] bar.company.com "test" deleted
W0225 22:27:45.314] /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh: line 295: 61404 Killed                  kubectl "${kube_flags[@]}" get bars --request-timeout=1m --watch-only -o name
W0225 22:27:51.611] E0225 22:27:51.610055   47432 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "company.com/v1, Resource=bars": unable to monitor quota for resource "company.com/v1, Resource=bars", couldn't start monitor for resource "company.com/v1, Resource=validfoos": unable to monitor quota for resource "company.com/v1, Resource=validfoos", couldn't start monitor for resource "mygroup.example.com/v1alpha1, Resource=resources": unable to monitor quota for resource "mygroup.example.com/v1alpha1, Resource=resources", couldn't start monitor for resource "company.com/v1, Resource=foos": unable to monitor quota for resource "company.com/v1, Resource=foos"]
W0225 22:27:52.122] I0225 22:27:52.121798   47432 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0225 22:27:52.123] I0225 22:27:52.123127   44054 clientconn.go:551] parsed scheme: ""
W0225 22:27:52.124] I0225 22:27:52.123175   44054 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0225 22:27:52.124] I0225 22:27:52.123210   44054 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0225 22:27:52.124] I0225 22:27:52.123246   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:27:52.124] I0225 22:27:52.123790   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 65 lines ...
I0225 22:27:59.028] namespace/non-native-resources created
I0225 22:27:59.223] bar.company.com/test created
I0225 22:27:59.332] crd.sh:456: Successful get bars {{len .items}}: 1
I0225 22:27:59.418] namespace "non-native-resources" deleted
I0225 22:28:04.691] crd.sh:459: Successful get bars {{len .items}}: 0
I0225 22:28:04.869] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0225 22:28:04.970] Error from server (NotFound): namespaces "non-native-resources" not found
I0225 22:28:05.071] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0225 22:28:05.089] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0225 22:28:05.203] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0225 22:28:05.244] +++ exit code: 0
I0225 22:28:05.318] Recording: run_cmd_with_img_tests
I0225 22:28:05.318] Running command: run_cmd_with_img_tests
... skipping 10 lines ...
W0225 22:28:05.639] I0225 22:28:05.639214   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551133685-24627", Name:"test1-848d5d4b47", UID:"9ed07592-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"879", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-848d5d4b47-kl2n5
I0225 22:28:05.740] Successful
I0225 22:28:05.740] message:deployment.apps/test1 created
I0225 22:28:05.741] has:deployment.apps/test1 created
I0225 22:28:05.741] deployment.extensions "test1" deleted
I0225 22:28:05.817] Successful
I0225 22:28:05.817] message:error: Invalid image name "InvalidImageName": invalid reference format
I0225 22:28:05.817] has:error: Invalid image name "InvalidImageName": invalid reference format
I0225 22:28:05.831] +++ exit code: 0
I0225 22:28:05.878] +++ [0225 22:28:05] Testing recursive resources
I0225 22:28:05.885] +++ [0225 22:28:05] Creating namespace namespace-1551133685-5916
I0225 22:28:05.966] namespace/namespace-1551133685-5916 created
I0225 22:28:06.042] Context "test" modified.
I0225 22:28:06.139] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:28:06.433] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:28:06.435] Successful
I0225 22:28:06.436] message:pod/busybox0 created
I0225 22:28:06.436] pod/busybox1 created
I0225 22:28:06.436] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0225 22:28:06.436] has:error validating data: kind not set
I0225 22:28:06.539] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:28:06.749] generic-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0225 22:28:06.752] Successful
I0225 22:28:06.753] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0225 22:28:06.753] has:Object 'Kind' is missing
I0225 22:28:06.865] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:28:07.176] generic-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0225 22:28:07.179] Successful
I0225 22:28:07.179] message:pod/busybox0 replaced
I0225 22:28:07.179] pod/busybox1 replaced
I0225 22:28:07.180] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0225 22:28:07.180] has:error validating data: kind not set
I0225 22:28:07.290] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:28:07.408] Successful
I0225 22:28:07.409] message:Name:               busybox0
I0225 22:28:07.409] Namespace:          namespace-1551133685-5916
I0225 22:28:07.409] Priority:           0
I0225 22:28:07.409] PriorityClassName:  <none>
... skipping 159 lines ...
I0225 22:28:07.429] has:Object 'Kind' is missing
I0225 22:28:07.524] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:28:07.748] generic-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0225 22:28:07.751] Successful
I0225 22:28:07.751] message:pod/busybox0 annotated
I0225 22:28:07.751] pod/busybox1 annotated
I0225 22:28:07.752] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0225 22:28:07.752] has:Object 'Kind' is missing
I0225 22:28:07.858] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:28:08.168] generic-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0225 22:28:08.172] Successful
I0225 22:28:08.172] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0225 22:28:08.172] pod/busybox0 configured
I0225 22:28:08.173] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0225 22:28:08.173] pod/busybox1 configured
I0225 22:28:08.173] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0225 22:28:08.173] has:error validating data: kind not set
I0225 22:28:08.279] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:28:08.460] deployment.apps/nginx created
W0225 22:28:08.561] I0225 22:28:08.467816   47432 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1551133685-5916", Name:"nginx", UID:"a0802527-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"903", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-5f7cff5b56 to 3
W0225 22:28:08.562] I0225 22:28:08.473329   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551133685-5916", Name:"nginx-5f7cff5b56", UID:"a0812b94-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"904", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5f7cff5b56-q7snl
W0225 22:28:08.562] I0225 22:28:08.478260   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551133685-5916", Name:"nginx-5f7cff5b56", UID:"a0812b94-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"904", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5f7cff5b56-q47g7
W0225 22:28:08.563] I0225 22:28:08.478582   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551133685-5916", Name:"nginx-5f7cff5b56", UID:"a0812b94-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"904", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5f7cff5b56-78ktc
... skipping 48 lines ...
W0225 22:28:09.082] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0225 22:28:09.182] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:28:09.294] generic-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:28:09.297] Successful
I0225 22:28:09.298] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0225 22:28:09.298] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0225 22:28:09.298] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0225 22:28:09.299] has:Object 'Kind' is missing
I0225 22:28:09.410] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:28:09.515] Successful
I0225 22:28:09.515] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0225 22:28:09.516] has:busybox0:busybox1:
I0225 22:28:09.518] Successful
I0225 22:28:09.518] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0225 22:28:09.518] has:Object 'Kind' is missing
W0225 22:28:09.619] I0225 22:28:09.584320   47432 namespace_controller.go:171] Namespace has been deleted non-native-resources
I0225 22:28:09.720] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:28:09.746] pod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0225 22:28:09.854] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0225 22:28:09.858] Successful
I0225 22:28:09.858] message:pod/busybox0 labeled
I0225 22:28:09.858] pod/busybox1 labeled
I0225 22:28:09.858] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0225 22:28:09.859] has:Object 'Kind' is missing
I0225 22:28:09.969] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:28:10.081] pod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0225 22:28:10.188] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0225 22:28:10.190] Successful
I0225 22:28:10.190] message:pod/busybox0 patched
I0225 22:28:10.190] pod/busybox1 patched
I0225 22:28:10.191] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0225 22:28:10.191] has:Object 'Kind' is missing
I0225 22:28:10.303] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:28:10.526] generic-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:28:10.529] Successful
I0225 22:28:10.530] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0225 22:28:10.530] pod "busybox0" force deleted
I0225 22:28:10.530] pod "busybox1" force deleted
I0225 22:28:10.530] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0225 22:28:10.530] has:Object 'Kind' is missing
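The force-deletion lines above come from immediate deletion, which skips the graceful-termination wait; that is what the preceding warning refers to. A sketch of the invocation, assuming the same recursive fixture directory:

    # returns as soon as the API objects are gone, without waiting for kubelet confirmation
    kubectl delete -f hack/testdata/recursive/pod --recursive --grace-period=0 --force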
I0225 22:28:10.641] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:28:10.830] replicationcontroller/busybox0 created
I0225 22:28:10.835] replicationcontroller/busybox1 created
W0225 22:28:10.936] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0225 22:28:10.937] I0225 22:28:10.835615   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551133685-5916", Name:"busybox0", UID:"a1e9be29-394c-11e9-bf9a-0242ac110002", APIVersion:"v1", ResourceVersion:"934", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-zwtpb
W0225 22:28:10.937] I0225 22:28:10.842424   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551133685-5916", Name:"busybox1", UID:"a1eabc68-394c-11e9-bf9a-0242ac110002", APIVersion:"v1", ResourceVersion:"936", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-4t7c7
I0225 22:28:11.038] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:28:11.071] generic-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:28:11.178] generic-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I0225 22:28:11.287] generic-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I0225 22:28:11.508] generic-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0225 22:28:11.613] generic-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0225 22:28:11.615] Successful
I0225 22:28:11.616] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0225 22:28:11.616] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0225 22:28:11.616] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0225 22:28:11.617] has:Object 'Kind' is missing
I0225 22:28:11.707] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0225 22:28:11.806] horizontalpodautoscaler.autoscaling "busybox1" deleted
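The HPA assertions at generic-resources.sh:343-344 pin down the three autoscaler spec fields, which map directly onto the autoscale flags. A sketch of producing and reading them back:

    kubectl autoscale rc busybox0 --min=1 --max=2 --cpu-percent=80
    kubectl get hpa busybox0 -o go-template='{{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}'
    # 1 2 80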
I0225 22:28:11.920] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:28:12.026] generic-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I0225 22:28:12.133] generic-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I0225 22:28:12.363] generic-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0225 22:28:12.470] generic-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0225 22:28:12.473] Successful
I0225 22:28:12.473] message:service/busybox0 exposed
I0225 22:28:12.473] service/busybox1 exposed
I0225 22:28:12.474] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0225 22:28:12.474] has:Object 'Kind' is missing
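The "<no value>" in the service assertions above is expected: exposing without naming the port leaves .spec.ports[0].name unset, and the Go template renders a missing field that way. A sketch:

    kubectl expose rc busybox0 --port=80
    kubectl get service busybox0 -o go-template='{{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}'
    # <no value> 80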
I0225 22:28:12.582] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:28:12.688] generic-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I0225 22:28:12.796] generic-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I0225 22:28:13.035] generic-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I0225 22:28:13.149] generic-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I0225 22:28:13.151] Successful
I0225 22:28:13.151] message:replicationcontroller/busybox0 scaled
I0225 22:28:13.152] replicationcontroller/busybox1 scaled
I0225 22:28:13.152] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0225 22:28:13.153] has:Object 'Kind' is missing
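The scale step grows both controllers from 1 to 2 replicas in one recursive invocation while still surfacing the decode error for the broken manifest. A sketch, assuming scale accepts the same recursive filename flags the other verbs use in this suite:

    kubectl scale --replicas=2 -f hack/testdata/recursive/rc --recursive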
W0225 22:28:13.253] I0225 22:28:12.907539   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551133685-5916", Name:"busybox0", UID:"a1e9be29-394c-11e9-bf9a-0242ac110002", APIVersion:"v1", ResourceVersion:"955", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-4vdtr
W0225 22:28:13.254] I0225 22:28:12.921638   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551133685-5916", Name:"busybox1", UID:"a1eabc68-394c-11e9-bf9a-0242ac110002", APIVersion:"v1", ResourceVersion:"959", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-xmtfz
I0225 22:28:13.354] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:28:13.490] generic-resources.sh:381: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:28:13.493] Successful
I0225 22:28:13.494] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0225 22:28:13.494] replicationcontroller "busybox0" force deleted
I0225 22:28:13.494] replicationcontroller "busybox1" force deleted
I0225 22:28:13.495] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0225 22:28:13.495] has:Object 'Kind' is missing
I0225 22:28:13.598] generic-resources.sh:386: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:28:13.784] deployment.apps/nginx1-deployment created
I0225 22:28:13.792] deployment.apps/nginx0-deployment created
W0225 22:28:13.893] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0225 22:28:13.893] I0225 22:28:13.792096   47432 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1551133685-5916", Name:"nginx1-deployment", UID:"a3ac6239-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"977", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-7c76c6cbb8 to 2
W0225 22:28:13.894] I0225 22:28:13.797592   47432 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1551133685-5916", Name:"nginx0-deployment", UID:"a3ad950d-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"979", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-7bb85585d7 to 2
W0225 22:28:13.894] I0225 22:28:13.798056   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551133685-5916", Name:"nginx1-deployment-7c76c6cbb8", UID:"a3ad9245-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"978", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7c76c6cbb8-q84t8
W0225 22:28:13.894] I0225 22:28:13.804790   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551133685-5916", Name:"nginx0-deployment-7bb85585d7", UID:"a3aea24a-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"982", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-7bb85585d7-84hwk
W0225 22:28:13.895] I0225 22:28:13.804858   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551133685-5916", Name:"nginx1-deployment-7c76c6cbb8", UID:"a3ad9245-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"978", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7c76c6cbb8-qrjfj
W0225 22:28:13.895] I0225 22:28:13.813931   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551133685-5916", Name:"nginx0-deployment-7bb85585d7", UID:"a3aea24a-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"982", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-7bb85585d7-ftzwq
I0225 22:28:13.996] generic-resources.sh:390: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0225 22:28:14.049] generic-resources.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0225 22:28:14.325] generic-resources.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0225 22:28:14.328] Successful
I0225 22:28:14.328] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0225 22:28:14.329] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0225 22:28:14.329] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0225 22:28:14.329] has:Object 'Kind' is missing
I0225 22:28:14.444] deployment.apps/nginx1-deployment paused
I0225 22:28:14.453] deployment.apps/nginx0-deployment paused
I0225 22:28:14.586] generic-resources.sh:402: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0225 22:28:14.589] Successful
I0225 22:28:14.589] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
I0225 22:28:15.024] 1         <none>
I0225 22:28:15.024] 
I0225 22:28:15.024] deployment.apps/nginx0-deployment 
I0225 22:28:15.025] REVISION  CHANGE-CAUSE
I0225 22:28:15.025] 1         <none>
I0225 22:28:15.025] 
I0225 22:28:15.025] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0225 22:28:15.025] has:nginx0-deployment
I0225 22:28:15.026] Successful
I0225 22:28:15.026] message:deployment.apps/nginx1-deployment 
I0225 22:28:15.026] REVISION  CHANGE-CAUSE
I0225 22:28:15.026] 1         <none>
I0225 22:28:15.026] 
I0225 22:28:15.026] deployment.apps/nginx0-deployment 
I0225 22:28:15.026] REVISION  CHANGE-CAUSE
I0225 22:28:15.026] 1         <none>
I0225 22:28:15.027] 
I0225 22:28:15.027] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0225 22:28:15.027] has:nginx1-deployment
I0225 22:28:15.030] Successful
I0225 22:28:15.030] message:deployment.apps/nginx1-deployment 
I0225 22:28:15.030] REVISION  CHANGE-CAUSE
I0225 22:28:15.030] 1         <none>
I0225 22:28:15.030] 
I0225 22:28:15.031] deployment.apps/nginx0-deployment 
I0225 22:28:15.031] REVISION  CHANGE-CAUSE
I0225 22:28:15.031] 1         <none>
I0225 22:28:15.031] 
I0225 22:28:15.031] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0225 22:28:15.032] has:Object 'Kind' is missing
I0225 22:28:15.121] deployment.apps "nginx1-deployment" force deleted
I0225 22:28:15.128] deployment.apps "nginx0-deployment" force deleted
W0225 22:28:15.229] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0225 22:28:15.230] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0225 22:28:16.242] generic-resources.sh:424: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:28:16.426] replicationcontroller/busybox0 created
I0225 22:28:16.432] replicationcontroller/busybox1 created
W0225 22:28:16.533] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0225 22:28:16.534] I0225 22:28:16.432334   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551133685-5916", Name:"busybox0", UID:"a53f9b91-394c-11e9-bf9a-0242ac110002", APIVersion:"v1", ResourceVersion:"1026", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-btcrs
W0225 22:28:16.534] I0225 22:28:16.438229   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551133685-5916", Name:"busybox1", UID:"a540c0d6-394c-11e9-bf9a-0242ac110002", APIVersion:"v1", ResourceVersion:"1028", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-tqd5s
I0225 22:28:16.635] generic-resources.sh:428: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:28:16.671] Successful
I0225 22:28:16.671] message:no rollbacker has been implemented for "ReplicationController"
I0225 22:28:16.671] no rollbacker has been implemented for "ReplicationController"
... skipping 3 lines ...
I0225 22:28:16.674] message:no rollbacker has been implemented for "ReplicationController"
I0225 22:28:16.674] no rollbacker has been implemented for "ReplicationController"
I0225 22:28:16.675] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0225 22:28:16.675] has:Object 'Kind' is missing
I0225 22:28:16.783] Successful
I0225 22:28:16.784] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0225 22:28:16.784] error: replicationcontrollers "busybox0" pausing is not supported
I0225 22:28:16.784] error: replicationcontrollers "busybox1" pausing is not supported
I0225 22:28:16.784] has:Object 'Kind' is missing
I0225 22:28:16.786] Successful
I0225 22:28:16.787] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0225 22:28:16.787] error: replicationcontrollers "busybox0" pausing is not supported
I0225 22:28:16.788] error: replicationcontrollers "busybox1" pausing is not supported
I0225 22:28:16.788] has:replicationcontrollers "busybox0" pausing is not supported
I0225 22:28:16.789] Successful
I0225 22:28:16.790] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0225 22:28:16.790] error: replicationcontrollers "busybox0" pausing is not supported
I0225 22:28:16.790] error: replicationcontrollers "busybox1" pausing is not supported
I0225 22:28:16.791] has:replicationcontrollers "busybox1" pausing is not supported
I0225 22:28:16.902] Successful
I0225 22:28:16.903] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0225 22:28:16.903] error: replicationcontrollers "busybox0" resuming is not supported
I0225 22:28:16.903] error: replicationcontrollers "busybox1" resuming is not supported
I0225 22:28:16.903] has:Object 'Kind' is missing
I0225 22:28:16.905] Successful
I0225 22:28:16.906] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0225 22:28:16.906] error: replicationcontrollers "busybox0" resuming is not supported
I0225 22:28:16.906] error: replicationcontrollers "busybox1" resuming is not supported
I0225 22:28:16.906] has:replicationcontrollers "busybox0" resuming is not supported
I0225 22:28:16.908] Successful
I0225 22:28:16.909] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0225 22:28:16.909] error: replicationcontrollers "busybox0" resuming is not supported
I0225 22:28:16.909] error: replicationcontrollers "busybox1" resuming is not supported
I0225 22:28:16.909] has:replicationcontrollers "busybox0" resuming is not supported
I0225 22:28:16.999] replicationcontroller "busybox0" force deleted
I0225 22:28:17.007] replicationcontroller "busybox1" force deleted
W0225 22:28:17.107] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0225 22:28:17.108] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
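The pause/resume failures in this block are expected behavior, not test bugs: rollout pause and resume are implemented for Deployments, and ReplicationControllers have no pauser or rollbacker, so each object fails individually while the broken manifest additionally fails to decode. A sketch of the two invocations:

    kubectl rollout pause -f hack/testdata/recursive/rc --recursive    # pausing is not supported
    kubectl rollout resume -f hack/testdata/recursive/rc --recursive   # resuming is not supported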
I0225 22:28:18.015] Recording: run_namespace_tests
I0225 22:28:18.015] Running command: run_namespace_tests
I0225 22:28:18.039] 
I0225 22:28:18.042] +++ Running case: test-cmd.run_namespace_tests 
I0225 22:28:18.045] +++ working dir: /go/src/k8s.io/kubernetes
I0225 22:28:18.048] +++ command: run_namespace_tests
I0225 22:28:18.059] +++ [0225 22:28:18] Testing kubectl(v1:namespaces)
I0225 22:28:18.143] namespace/my-namespace created
I0225 22:28:18.255] core.sh:1321: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0225 22:28:18.366] namespace "my-namespace" deleted
W0225 22:28:21.764] E0225 22:28:21.763331   47432 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0225 22:28:22.376] I0225 22:28:22.375075   47432 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0225 22:28:22.476] I0225 22:28:22.475638   47432 controller_utils.go:1028] Caches are synced for garbage collector controller
I0225 22:28:23.564] namespace/my-namespace condition met
I0225 22:28:23.673] Successful
I0225 22:28:23.673] message:Error from server (NotFound): namespaces "my-namespace" not found
I0225 22:28:23.673] has: not found
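Namespace deletion is asynchronous (finalizers must clear first), which is why the test deletes, waits for the delete condition, and only then asserts NotFound. A sketch of that sequence:

    kubectl delete namespace my-namespace
    kubectl wait --for=delete namespace/my-namespace --timeout=60s   # namespace/my-namespace condition met
    kubectl get namespace my-namespace                               # Error from server (NotFound)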
I0225 22:28:23.801] core.sh:1336: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I0225 22:28:23.890] namespace/other created
I0225 22:28:24.005] core.sh:1340: Successful get namespaces/other {{.metadata.name}}: other
I0225 22:28:24.115] core.sh:1344: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:28:24.295] pod/valid-pod created
I0225 22:28:24.417] core.sh:1348: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0225 22:28:24.526] core.sh:1350: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0225 22:28:24.631] Successful
I0225 22:28:24.631] message:error: a resource cannot be retrieved by name across all namespaces
I0225 22:28:24.631] has:a resource cannot be retrieved by name across all namespaces
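The error above is the rule that a namespaced object fetched by name needs a concrete namespace; --all-namespaces is only valid for list-style gets. A sketch:

    kubectl get pods valid-pod --all-namespaces    # error: a resource cannot be retrieved by name across all namespaces
    kubectl get pods valid-pod --namespace=other   # succeeds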
I0225 22:28:24.740] core.sh:1357: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0225 22:28:24.835] pod "valid-pod" force deleted
W0225 22:28:24.936] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0225 22:28:25.037] core.sh:1361: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:28:25.368] namespace "other" deleted
... skipping 115 lines ...
I0225 22:28:48.618] +++ command: run_client_config_tests
I0225 22:28:48.635] +++ [0225 22:28:48] Creating namespace namespace-1551133728-3498
I0225 22:28:48.727] namespace/namespace-1551133728-3498 created
I0225 22:28:48.805] Context "test" modified.
I0225 22:28:48.813] +++ [0225 22:28:48] Testing client config
I0225 22:28:48.893] Successful
I0225 22:28:48.893] message:error: stat missing: no such file or directory
I0225 22:28:48.893] has:missing: no such file or directory
I0225 22:28:48.975] Successful
I0225 22:28:48.975] message:error: stat missing: no such file or directory
I0225 22:28:48.975] has:missing: no such file or directory
I0225 22:28:49.054] Successful
I0225 22:28:49.055] message:error: stat missing: no such file or directory
I0225 22:28:49.055] has:missing: no such file or directory
I0225 22:28:49.135] Successful
I0225 22:28:49.136] message:Error in configuration: context was not found for specified context: missing-context
I0225 22:28:49.136] has:context was not found for specified context: missing-context
I0225 22:28:49.217] Successful
I0225 22:28:49.217] message:error: no server found for cluster "missing-cluster"
I0225 22:28:49.218] has:no server found for cluster "missing-cluster"
I0225 22:28:49.301] Successful
I0225 22:28:49.301] message:error: auth info "missing-user" does not exist
I0225 22:28:49.301] has:auth info "missing-user" does not exist
I0225 22:28:49.466] Successful
I0225 22:28:49.467] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0225 22:28:49.467] has:Error loading config file
I0225 22:28:49.549] Successful
I0225 22:28:49.550] message:error: stat missing-config: no such file or directory
I0225 22:28:49.550] has:no such file or directory
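Each failure in this client-config block maps to one way of pointing kubectl at configuration that does not exist; a sketch of the likely invocations behind a few of them, using kubectl's standard global flags:

    kubectl get pods --kubeconfig=missing        # error: stat missing: no such file or directory
    kubectl get pods --context=missing-context   # context was not found for specified context
    kubectl get pods --cluster=missing-cluster   # error: no server found for cluster "missing-cluster"
    kubectl get pods --user=missing-user         # error: auth info "missing-user" does not exist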
I0225 22:28:49.570] +++ exit code: 0
I0225 22:28:49.628] Recording: run_service_accounts_tests
I0225 22:28:49.628] Running command: run_service_accounts_tests
I0225 22:28:49.657] 
I0225 22:28:49.660] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 36 lines ...
I0225 22:28:56.814] Labels:                        run=pi
I0225 22:28:56.814] Annotations:                   <none>
I0225 22:28:56.814] Schedule:                      59 23 31 2 *
I0225 22:28:56.814] Concurrency Policy:            Allow
I0225 22:28:56.815] Suspend:                       False
I0225 22:28:56.815] Successful Job History Limit:  824642325160
I0225 22:28:56.815] Failed Job History Limit:      1
I0225 22:28:56.815] Starting Deadline Seconds:     <unset>
I0225 22:28:56.815] Selector:                      <unset>
I0225 22:28:56.815] Parallelism:                   <unset>
I0225 22:28:56.815] Completions:                   <unset>
I0225 22:28:56.815] Pod Template:
I0225 22:28:56.816]   Labels:  run=pi
... skipping 32 lines ...
I0225 22:28:57.424]                 job-name=test-job
I0225 22:28:57.424]                 run=pi
I0225 22:28:57.424] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0225 22:28:57.424] Parallelism:    1
I0225 22:28:57.424] Completions:    1
I0225 22:28:57.424] Start Time:     Mon, 25 Feb 2019 22:28:57 +0000
I0225 22:28:57.425] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0225 22:28:57.425] Pod Template:
I0225 22:28:57.425]   Labels:  controller-uid=bd7f03b6-394c-11e9-bf9a-0242ac110002
I0225 22:28:57.425]            job-name=test-job
I0225 22:28:57.425]            run=pi
I0225 22:28:57.425]   Containers:
I0225 22:28:57.425]    pi:
... skipping 388 lines ...
I0225 22:29:08.094]   selector:
I0225 22:29:08.094]     role: padawan
I0225 22:29:08.095]   sessionAffinity: None
I0225 22:29:08.095]   type: ClusterIP
I0225 22:29:08.095] status:
I0225 22:29:08.095]   loadBalancer: {}
W0225 22:29:08.195] error: you must specify resources by --filename when --local is set.
W0225 22:29:08.196] Example resource specifications include:
W0225 22:29:08.196]    '-f rsrc.yaml'
W0225 22:29:08.196]    '--filename=rsrc.json'
I0225 22:29:08.297] core.sh:886: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0225 22:29:08.487] core.sh:893: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0225 22:29:08.584] service "redis-master" deleted
... skipping 104 lines ...
I0225 22:29:16.407]   Volumes:	<none>
I0225 22:29:16.407]  (dry run)
I0225 22:29:16.525] apps.sh:79: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0225 22:29:16.635] apps.sh:80: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0225 22:29:16.751] apps.sh:81: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0225 22:29:16.880] daemonset.extensions/bind rolled back
W0225 22:29:16.984] E0225 22:29:16.913427   47432 daemon_controller.go:302] namespace-1551133754-32355/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1551133754-32355", SelfLink:"/apis/apps/v1/namespaces/namespace-1551133754-32355/daemonsets/bind", UID:"c842bee0-394c-11e9-bf9a-0242ac110002", ResourceVersion:"1279", Generation:3, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63686730555, loc:(*time.Location)(0x6a5f460)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"3", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1551133754-32355\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc003c9c660), Fields:(*v1.Fields)(0xc003121c28)}, v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc003c9c7a0), Fields:(*v1.Fields)(0xc003121c78)}, v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc003c9cf40), Fields:(*v1.Fields)(0xc003121d18)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc003c9d040), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:2.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc004399a58), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc003e77980), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc003c9d080), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc003121d70)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc004399ad0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:2, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
I0225 22:29:17.085] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0225 22:29:17.117] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0225 22:29:17.244] Successful
I0225 22:29:17.245] message:error: unable to find specified revision 1000000 in history
I0225 22:29:17.245] has:unable to find specified revision
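DaemonSet rollback behaves like Deployment rollback here: undo with no arguments returns to the previous revision, and a revision that never existed is rejected. A sketch of both paths:

    kubectl rollout undo daemonset/bind                        # rolls the pod template back
    kubectl rollout undo daemonset/bind --to-revision=1000000  # error: unable to find specified revision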
I0225 22:29:17.355] apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0225 22:29:17.468] apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0225 22:29:17.596] daemonset.extensions/bind rolled back
I0225 22:29:17.711] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0225 22:29:17.826] apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 28 lines ...
I0225 22:29:19.459] Namespace:    namespace-1551133758-55
I0225 22:29:19.459] Selector:     app=guestbook,tier=frontend
I0225 22:29:19.459] Labels:       app=guestbook
I0225 22:29:19.459]               tier=frontend
I0225 22:29:19.459] Annotations:  <none>
I0225 22:29:19.459] Replicas:     3 current / 3 desired
I0225 22:29:19.460] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0225 22:29:19.460] Pod Template:
I0225 22:29:19.460]   Labels:  app=guestbook
I0225 22:29:19.460]            tier=frontend
I0225 22:29:19.460]   Containers:
I0225 22:29:19.460]    php-redis:
I0225 22:29:19.460]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0225 22:29:19.596] Namespace:    namespace-1551133758-55
I0225 22:29:19.596] Selector:     app=guestbook,tier=frontend
I0225 22:29:19.596] Labels:       app=guestbook
I0225 22:29:19.597]               tier=frontend
I0225 22:29:19.597] Annotations:  <none>
I0225 22:29:19.597] Replicas:     3 current / 3 desired
I0225 22:29:19.597] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0225 22:29:19.597] Pod Template:
I0225 22:29:19.598]   Labels:  app=guestbook
I0225 22:29:19.598]            tier=frontend
I0225 22:29:19.598]   Containers:
I0225 22:29:19.598]    php-redis:
I0225 22:29:19.598]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0225 22:29:19.753] Namespace:    namespace-1551133758-55
I0225 22:29:19.753] Selector:     app=guestbook,tier=frontend
I0225 22:29:19.753] Labels:       app=guestbook
I0225 22:29:19.753]               tier=frontend
I0225 22:29:19.753] Annotations:  <none>
I0225 22:29:19.754] Replicas:     3 current / 3 desired
I0225 22:29:19.754] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0225 22:29:19.754] Pod Template:
I0225 22:29:19.754]   Labels:  app=guestbook
I0225 22:29:19.754]            tier=frontend
I0225 22:29:19.754]   Containers:
I0225 22:29:19.754]    php-redis:
I0225 22:29:19.754]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I0225 22:29:19.894] Namespace:    namespace-1551133758-55
I0225 22:29:19.895] Selector:     app=guestbook,tier=frontend
I0225 22:29:19.895] Labels:       app=guestbook
I0225 22:29:19.895]               tier=frontend
I0225 22:29:19.895] Annotations:  <none>
I0225 22:29:19.895] Replicas:     3 current / 3 desired
I0225 22:29:19.895] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0225 22:29:19.895] Pod Template:
I0225 22:29:19.896]   Labels:  app=guestbook
I0225 22:29:19.896]            tier=frontend
I0225 22:29:19.896]   Containers:
I0225 22:29:19.896]    php-redis:
I0225 22:29:19.896]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0225 22:29:20.076] Namespace:    namespace-1551133758-55
I0225 22:29:20.077] Selector:     app=guestbook,tier=frontend
I0225 22:29:20.077] Labels:       app=guestbook
I0225 22:29:20.077]               tier=frontend
I0225 22:29:20.077] Annotations:  <none>
I0225 22:29:20.077] Replicas:     3 current / 3 desired
I0225 22:29:20.078] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0225 22:29:20.078] Pod Template:
I0225 22:29:20.078]   Labels:  app=guestbook
I0225 22:29:20.078]            tier=frontend
I0225 22:29:20.078]   Containers:
I0225 22:29:20.078]    php-redis:
I0225 22:29:20.078]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0225 22:29:20.209] Namespace:    namespace-1551133758-55
I0225 22:29:20.209] Selector:     app=guestbook,tier=frontend
I0225 22:29:20.209] Labels:       app=guestbook
I0225 22:29:20.209]               tier=frontend
I0225 22:29:20.209] Annotations:  <none>
I0225 22:29:20.209] Replicas:     3 current / 3 desired
I0225 22:29:20.210] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0225 22:29:20.210] Pod Template:
I0225 22:29:20.210]   Labels:  app=guestbook
I0225 22:29:20.210]            tier=frontend
I0225 22:29:20.210]   Containers:
I0225 22:29:20.210]    php-redis:
I0225 22:29:20.210]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0225 22:29:20.333] Namespace:    namespace-1551133758-55
I0225 22:29:20.333] Selector:     app=guestbook,tier=frontend
I0225 22:29:20.334] Labels:       app=guestbook
I0225 22:29:20.334]               tier=frontend
I0225 22:29:20.334] Annotations:  <none>
I0225 22:29:20.334] Replicas:     3 current / 3 desired
I0225 22:29:20.334] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0225 22:29:20.334] Pod Template:
I0225 22:29:20.335]   Labels:  app=guestbook
I0225 22:29:20.335]            tier=frontend
I0225 22:29:20.335]   Containers:
I0225 22:29:20.335]    php-redis:
I0225 22:29:20.335]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0225 22:29:20.458] Namespace:    namespace-1551133758-55
I0225 22:29:20.458] Selector:     app=guestbook,tier=frontend
I0225 22:29:20.458] Labels:       app=guestbook
I0225 22:29:20.459]               tier=frontend
I0225 22:29:20.459] Annotations:  <none>
I0225 22:29:20.459] Replicas:     3 current / 3 desired
I0225 22:29:20.459] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0225 22:29:20.459] Pod Template:
I0225 22:29:20.459]   Labels:  app=guestbook
I0225 22:29:20.459]            tier=frontend
I0225 22:29:20.459]   Containers:
I0225 22:29:20.460]    php-redis:
I0225 22:29:20.460]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
W0225 22:29:20.774] I0225 22:29:20.685652   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551133758-55", Name:"frontend", UID:"caa33f44-394c-11e9-bf9a-0242ac110002", APIVersion:"v1", ResourceVersion:"1317", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-gpsk2
I0225 22:29:20.875] core.sh:1071: Successful get rc frontend {{.spec.replicas}}: 2
I0225 22:29:20.897] core.sh:1075: Successful get rc frontend {{.spec.replicas}}: 2
I0225 22:29:21.107] core.sh:1079: Successful get rc frontend {{.spec.replicas}}: 2
I0225 22:29:21.214] core.sh:1083: Successful get rc frontend {{.spec.replicas}}: 2
I0225 22:29:21.324] replicationcontroller/frontend scaled
W0225 22:29:21.425] error: Expected replicas to be 3, was 2
W0225 22:29:21.425] I0225 22:29:21.330248   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551133758-55", Name:"frontend", UID:"caa33f44-394c-11e9-bf9a-0242ac110002", APIVersion:"v1", ResourceVersion:"1323", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-5v5mn
I0225 22:29:21.526] core.sh:1087: Successful get rc frontend {{.spec.replicas}}: 3
I0225 22:29:21.540] core.sh:1091: Successful get rc frontend {{.spec.replicas}}: 3
I0225 22:29:21.646] replicationcontroller/frontend scaled
W0225 22:29:21.747] I0225 22:29:21.656146   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551133758-55", Name:"frontend", UID:"caa33f44-394c-11e9-bf9a-0242ac110002", APIVersion:"v1", ResourceVersion:"1328", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-5v5mn
I0225 22:29:21.848] core.sh:1095: Successful get rc frontend {{.spec.replicas}}: 2
... skipping 41 lines ...
I0225 22:29:24.222] service "expose-test-deployment" deleted
I0225 22:29:24.350] Successful
I0225 22:29:24.350] message:service/expose-test-deployment exposed
I0225 22:29:24.350] has:service/expose-test-deployment exposed
I0225 22:29:24.449] service "expose-test-deployment" deleted
I0225 22:29:24.560] Successful
I0225 22:29:24.561] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0225 22:29:24.561] See 'kubectl expose -h' for help and examples
I0225 22:29:24.561] has:invalid deployment: no selectors
I0225 22:29:24.660] Successful
I0225 22:29:24.660] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0225 22:29:24.661] See 'kubectl expose -h' for help and examples
I0225 22:29:24.661] has:invalid deployment: no selectors
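kubectl expose needs a label selector, taken either from the --selector flag or from the target object's .spec.selector; the deployment used in this case has none it can introspect, hence the error. A sketch of the workaround, with hypothetical labels:

    # supply the selector explicitly when the object cannot be introspected
    kubectl expose deployment nginx-deployment --port=80 --selector=app=nginx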
I0225 22:29:24.844] deployment.apps/nginx-deployment created
W0225 22:29:24.945] I0225 22:29:24.851524   47432 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1551133758-55", Name:"nginx-deployment", UID:"ce077b3b-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1446", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-64bb598779 to 3
W0225 22:29:24.946] I0225 22:29:24.855971   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551133758-55", Name:"nginx-deployment-64bb598779", UID:"ce086d2f-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1447", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-64bb598779-lpmrv
W0225 22:29:24.946] I0225 22:29:24.860624   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551133758-55", Name:"nginx-deployment-64bb598779", UID:"ce086d2f-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1447", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-64bb598779-fq7m6
... skipping 23 lines ...
I0225 22:29:27.108] service "frontend" deleted
I0225 22:29:27.118] service "frontend-2" deleted
I0225 22:29:27.128] service "frontend-3" deleted
I0225 22:29:27.141] service "frontend-4" deleted
I0225 22:29:27.151] service "frontend-5" deleted
I0225 22:29:27.267] Successful
I0225 22:29:27.267] message:error: cannot expose a Node
I0225 22:29:27.267] has:cannot expose
I0225 22:29:27.374] Successful
I0225 22:29:27.375] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0225 22:29:27.375] has:metadata.name: Invalid value
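The rejection above is metadata.name validation: Service names are DNS labels, capped at 63 characters, and the name in this test deliberately exceeds that. A sketch of an invocation that would trip it:

    # name longer than the 63-character DNS label limit
    kubectl expose deployment nginx-deployment --port=80 \
      --name=invalid-large-service-name-that-has-more-than-sixty-three-characters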
I0225 22:29:27.485] Successful
I0225 22:29:27.486] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 30 lines ...
I0225 22:29:29.779] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0225 22:29:29.891] core.sh:1259: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I0225 22:29:29.986] horizontalpodautoscaler.autoscaling "frontend" deleted
I0225 22:29:30.096] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0225 22:29:30.205] core.sh:1263: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0225 22:29:30.294] horizontalpodautoscaler.autoscaling "frontend" deleted
W0225 22:29:30.395] Error: required flag(s) "max" not set
W0225 22:29:30.395] 
W0225 22:29:30.395] 
W0225 22:29:30.395] Examples:
W0225 22:29:30.396]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0225 22:29:30.396]   kubectl autoscale deployment foo --min=2 --max=10
W0225 22:29:30.396]   
... skipping 54 lines ...
I0225 22:29:30.678]           limits:
I0225 22:29:30.679]             cpu: 300m
I0225 22:29:30.679]           requests:
I0225 22:29:30.679]             cpu: 300m
I0225 22:29:30.679]       terminationGracePeriodSeconds: 0
I0225 22:29:30.679] status: {}
W0225 22:29:30.780] Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
I0225 22:29:30.959] deployment.apps/nginx-deployment-resources created
W0225 22:29:31.060] I0225 22:29:30.967447   47432 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1551133758-55", Name:"nginx-deployment-resources", UID:"d1ac7cc3-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1586", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-695c766d58 to 3
W0225 22:29:31.061] I0225 22:29:30.972826   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551133758-55", Name:"nginx-deployment-resources-695c766d58", UID:"d1ad9a78-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1587", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-695c766d58-mkhrb
W0225 22:29:31.061] I0225 22:29:30.977995   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551133758-55", Name:"nginx-deployment-resources-695c766d58", UID:"d1ad9a78-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1587", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-695c766d58-kgtt7
W0225 22:29:31.062] I0225 22:29:30.982028   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551133758-55", Name:"nginx-deployment-resources-695c766d58", UID:"d1ad9a78-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1587", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-695c766d58-5vtxd
I0225 22:29:31.162] core.sh:1278: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
... skipping 2 lines ...
I0225 22:29:31.414] deployment.extensions/nginx-deployment-resources resource requirements updated
W0225 22:29:31.515] I0225 22:29:31.421755   47432 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1551133758-55", Name:"nginx-deployment-resources", UID:"d1ac7cc3-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1600", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-5b7fc6dd8b to 1
W0225 22:29:31.516] I0225 22:29:31.429237   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551133758-55", Name:"nginx-deployment-resources-5b7fc6dd8b", UID:"d1f2ee52-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1601", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5b7fc6dd8b-f25zp
I0225 22:29:31.616] core.sh:1283: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
I0225 22:29:31.655] core.sh:1284: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
I0225 22:29:31.886] deployment.extensions/nginx-deployment-resources resource requirements updated
W0225 22:29:31.987] error: unable to find container named redis
W0225 22:29:31.987] I0225 22:29:31.943997   47432 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1551133758-55", Name:"nginx-deployment-resources", UID:"d1ac7cc3-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1610", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-695c766d58 to 2
W0225 22:29:31.988] I0225 22:29:31.955623   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551133758-55", Name:"nginx-deployment-resources-695c766d58", UID:"d1ad9a78-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1614", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-695c766d58-mkhrb
W0225 22:29:31.988] I0225 22:29:31.968105   47432 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1551133758-55", Name:"nginx-deployment-resources", UID:"d1ac7cc3-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1613", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6bc4567bf6 to 1
W0225 22:29:31.989] I0225 22:29:31.974006   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551133758-55", Name:"nginx-deployment-resources-6bc4567bf6", UID:"d23b1720-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1621", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6bc4567bf6-m6pfb
I0225 22:29:32.089] core.sh:1289: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0225 22:29:32.156] core.sh:1290: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
... skipping 211 lines ...
I0225 22:29:32.753]     status: "True"
I0225 22:29:32.753]     type: Progressing
I0225 22:29:32.753]   observedGeneration: 4
I0225 22:29:32.753]   replicas: 4
I0225 22:29:32.753]   unavailableReplicas: 4
I0225 22:29:32.754]   updatedReplicas: 1
W0225 22:29:32.854] error: you must specify resources by --filename when --local is set.
W0225 22:29:32.854] Example resource specifications include:
W0225 22:29:32.855]    '-f rsrc.yaml'
W0225 22:29:32.855]    '--filename=rsrc.json'
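
The '--local' error above is the expected guard: with --local, kubectl never contacts the server, so the object must come from a file. A minimal sketch, assuming a manifest named deploy.yaml (illustrative):

  # Render the updated object locally without touching the cluster.
  kubectl set resources -f deploy.yaml --local --limits=cpu=200m -o yaml
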
I0225 22:29:32.956] core.sh:1299: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0225 22:29:33.032] core.sh:1300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I0225 22:29:33.144] core.sh:1301: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 44 lines ...
I0225 22:29:34.977]                 pod-template-hash=7875bf5c8b
I0225 22:29:34.978] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I0225 22:29:34.978]                 deployment.kubernetes.io/max-replicas: 2
I0225 22:29:34.978]                 deployment.kubernetes.io/revision: 1
I0225 22:29:34.978] Controlled By:  Deployment/test-nginx-apps
I0225 22:29:34.978] Replicas:       1 current / 1 desired
I0225 22:29:34.978] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0225 22:29:34.978] Pod Template:
I0225 22:29:34.979]   Labels:  app=test-nginx-apps
I0225 22:29:34.979]            pod-template-hash=7875bf5c8b
I0225 22:29:34.979]   Containers:
I0225 22:29:34.979]    nginx:
I0225 22:29:34.979]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 91 lines ...
I0225 22:29:40.343]     Image:	k8s.gcr.io/nginx:test-cmd
I0225 22:29:40.495] apps.sh:296: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0225 22:29:40.683] deployment.extensions/nginx rolled back
I0225 22:29:41.851] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0225 22:29:42.183] apps.sh:303: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0225 22:29:42.377] deployment.extensions/nginx rolled back
W0225 22:29:42.478] error: unable to find specified revision 1000000 in history
I0225 22:29:43.551] apps.sh:307: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0225 22:29:43.735] deployment.extensions/nginx paused
W0225 22:29:43.931] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
I0225 22:29:44.104] deployment.extensions/nginx resumed
I0225 22:29:44.340] deployment.extensions/nginx rolled back
I0225 22:29:44.670]     deployment.kubernetes.io/revision-history: 1,3
W0225 22:29:44.779] I0225 22:29:44.778371   47432 horizontal.go:320] Horizontal Pod Autoscaler frontend has been deleted in namespace-1551133758-55
W0225 22:29:44.916] error: desired revision (3) is different from the running revision (5)
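
The three rollout errors above (unknown revision, paused deployment, revision mismatch) all come from the same command family; a sketch of the usual sequence (deployment name illustrative):

  kubectl rollout history deployment/nginx                # list recorded revisions
  kubectl rollout pause deployment/nginx                  # undo is rejected while paused
  kubectl rollout resume deployment/nginx
  kubectl rollout undo deployment/nginx --to-revision=3   # must name an existing revision
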
I0225 22:29:45.120] deployment.apps/nginx2 created
W0225 22:29:45.221] I0225 22:29:45.127010   47432 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1551133773-8919", Name:"nginx2", UID:"da1d4a87-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1835", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx2-78cb9c866 to 3
W0225 22:29:45.222] I0225 22:29:45.135958   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551133773-8919", Name:"nginx2-78cb9c866", UID:"da1e3d1b-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1836", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-78cb9c866-gf99t
W0225 22:29:45.222] I0225 22:29:45.142233   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551133773-8919", Name:"nginx2-78cb9c866", UID:"da1e3d1b-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1836", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-78cb9c866-mqm99
W0225 22:29:45.222] I0225 22:29:45.143245   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551133773-8919", Name:"nginx2-78cb9c866", UID:"da1e3d1b-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1836", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-78cb9c866-x52lx
I0225 22:29:45.323] deployment.extensions "nginx2" deleted
... skipping 10 lines ...
I0225 22:29:46.072] deployment.extensions/nginx-deployment image updated
W0225 22:29:46.173] I0225 22:29:46.078102   47432 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1551133773-8919", Name:"nginx-deployment", UID:"da6e6e86-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1883", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5bfd55c857 to 1
W0225 22:29:46.173] I0225 22:29:46.083293   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551133773-8919", Name:"nginx-deployment-5bfd55c857", UID:"daaf7eac-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1884", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5bfd55c857-5xhmk
I0225 22:29:46.274] apps.sh:337: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0225 22:29:46.280] apps.sh:338: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0225 22:29:46.496] deployment.extensions/nginx-deployment image updated
W0225 22:29:46.597] error: unable to find container named "redis"
I0225 22:29:46.698] apps.sh:343: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0225 22:29:46.710] apps.sh:344: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
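
The 'unable to find container named "redis"' failure above shows that kubectl set image matches on the container name inside the pod template, not on the image. A sketch (names and tags illustrative):

  # Fails: the pod template has no container called redis.
  kubectl set image deployment/nginx-deployment redis=k8s.gcr.io/redis:1.0
  # Succeeds: nginx is the container's actual name.
  kubectl set image deployment/nginx-deployment nginx=k8s.gcr.io/nginx:1.7.9
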
I0225 22:29:46.810] deployment.apps/nginx-deployment image updated
I0225 22:29:46.916] apps.sh:347: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0225 22:29:47.015] apps.sh:348: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0225 22:29:47.191] apps.sh:351: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
... skipping 66 lines ...
I0225 22:29:50.900] Context "test" modified.
I0225 22:29:50.907] +++ [0225 22:29:50] Testing kubectl(v1:replicasets)
I0225 22:29:51.001] apps.sh:502: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:29:51.181] replicaset.apps/frontend created
I0225 22:29:51.196] +++ [0225 22:29:51] Deleting rs
I0225 22:29:51.283] replicaset.extensions "frontend" deleted
W0225 22:29:51.383] E0225 22:29:50.773455   47432 replica_set.go:450] Sync "namespace-1551133773-8919/nginx-deployment-58dbcd7c7f" failed with replicasets.apps "nginx-deployment-58dbcd7c7f" not found
W0225 22:29:51.384] E0225 22:29:50.823175   47432 replica_set.go:450] Sync "namespace-1551133773-8919/nginx-deployment-5cc58864fb" failed with replicasets.apps "nginx-deployment-5cc58864fb" not found
W0225 22:29:51.384] I0225 22:29:51.189826   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551133790-2315", Name:"frontend", UID:"ddba3c25-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2073", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-v7sjp
W0225 22:29:51.385] I0225 22:29:51.195094   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551133790-2315", Name:"frontend", UID:"ddba3c25-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2073", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-w7gdf
W0225 22:29:51.385] I0225 22:29:51.195414   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551133790-2315", Name:"frontend", UID:"ddba3c25-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2073", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-kb5x4
I0225 22:29:51.485] apps.sh:508: Successful get pods -l "tier=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:29:51.486] apps.sh:512: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:29:51.662] replicaset.apps/frontend-no-cascade created
W0225 22:29:51.763] I0225 22:29:51.668757   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551133790-2315", Name:"frontend-no-cascade", UID:"de039d5e-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2089", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-2kmbs
W0225 22:29:51.764] I0225 22:29:51.673914   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551133790-2315", Name:"frontend-no-cascade", UID:"de039d5e-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2089", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-mzzjg
W0225 22:29:51.764] I0225 22:29:51.676017   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551133790-2315", Name:"frontend-no-cascade", UID:"de039d5e-394c-11e9-bf9a-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2089", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-ktqxp
I0225 22:29:51.865] apps.sh:518: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
I0225 22:29:51.865] +++ [0225 22:29:51] Deleting rs
I0225 22:29:51.866] replicaset.extensions "frontend-no-cascade" deleted
W0225 22:29:51.967] E0225 22:29:51.885968   47432 replica_set.go:450] Sync "namespace-1551133790-2315/frontend-no-cascade" failed with Operation cannot be fulfilled on replicasets.apps "frontend-no-cascade": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1551133790-2315/frontend-no-cascade, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: de039d5e-394c-11e9-bf9a-0242ac110002, UID in object meta: 
I0225 22:29:52.067] apps.sh:522: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:29:52.095] apps.sh:524: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
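
The StorageError above is benign: the "no cascade" variant deletes only the replicaset object and orphans its pods, which is why the three php-redis pods survive the delete and are removed individually below. A sketch of the flag involved (kubectl of this vintage; in current kubectl it is spelled --cascade=orphan):

  kubectl delete rs frontend-no-cascade --cascade=false
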
I0225 22:29:52.192] (Bpod "frontend-no-cascade-2kmbs" deleted
I0225 22:29:52.199] pod "frontend-no-cascade-ktqxp" deleted
I0225 22:29:52.207] pod "frontend-no-cascade-mzzjg" deleted
I0225 22:29:52.318] apps.sh:527: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 8 lines ...
I0225 22:29:52.869] Namespace:    namespace-1551133790-2315
I0225 22:29:52.869] Selector:     app=guestbook,tier=frontend
I0225 22:29:52.869] Labels:       app=guestbook
I0225 22:29:52.869]               tier=frontend
I0225 22:29:52.869] Annotations:  <none>
I0225 22:29:52.869] Replicas:     3 current / 3 desired
I0225 22:29:52.869] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0225 22:29:52.869] Pod Template:
I0225 22:29:52.870]   Labels:  app=guestbook
I0225 22:29:52.870]            tier=frontend
I0225 22:29:52.870]   Containers:
I0225 22:29:52.870]    php-redis:
I0225 22:29:52.870]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0225 22:29:52.997] Namespace:    namespace-1551133790-2315
I0225 22:29:52.997] Selector:     app=guestbook,tier=frontend
I0225 22:29:52.997] Labels:       app=guestbook
I0225 22:29:52.997]               tier=frontend
I0225 22:29:52.998] Annotations:  <none>
I0225 22:29:52.998] Replicas:     3 current / 3 desired
I0225 22:29:52.998] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0225 22:29:52.998] Pod Template:
I0225 22:29:52.998]   Labels:  app=guestbook
I0225 22:29:52.998]            tier=frontend
I0225 22:29:52.999]   Containers:
I0225 22:29:52.999]    php-redis:
I0225 22:29:52.999]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0225 22:29:53.123] Namespace:    namespace-1551133790-2315
I0225 22:29:53.123] Selector:     app=guestbook,tier=frontend
I0225 22:29:53.123] Labels:       app=guestbook
I0225 22:29:53.123]               tier=frontend
I0225 22:29:53.123] Annotations:  <none>
I0225 22:29:53.123] Replicas:     3 current / 3 desired
I0225 22:29:53.124] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0225 22:29:53.124] Pod Template:
I0225 22:29:53.124]   Labels:  app=guestbook
I0225 22:29:53.124]            tier=frontend
I0225 22:29:53.124]   Containers:
I0225 22:29:53.124]    php-redis:
I0225 22:29:53.125]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 13 lines ...
I0225 22:29:53.328] Namespace:    namespace-1551133790-2315
I0225 22:29:53.328] Selector:     app=guestbook,tier=frontend
I0225 22:29:53.328] Labels:       app=guestbook
I0225 22:29:53.328]               tier=frontend
I0225 22:29:53.328] Annotations:  <none>
I0225 22:29:53.329] Replicas:     3 current / 3 desired
I0225 22:29:53.329] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0225 22:29:53.329] Pod Template:
I0225 22:29:53.329]   Labels:  app=guestbook
I0225 22:29:53.329]            tier=frontend
I0225 22:29:53.329]   Containers:
I0225 22:29:53.330]    php-redis:
I0225 22:29:53.330]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0225 22:29:53.421] Namespace:    namespace-1551133790-2315
I0225 22:29:53.421] Selector:     app=guestbook,tier=frontend
I0225 22:29:53.422] Labels:       app=guestbook
I0225 22:29:53.422]               tier=frontend
I0225 22:29:53.422] Annotations:  <none>
I0225 22:29:53.422] Replicas:     3 current / 3 desired
I0225 22:29:53.422] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0225 22:29:53.422] Pod Template:
I0225 22:29:53.422]   Labels:  app=guestbook
I0225 22:29:53.423]            tier=frontend
I0225 22:29:53.423]   Containers:
I0225 22:29:53.423]    php-redis:
I0225 22:29:53.423]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0225 22:29:53.549] Namespace:    namespace-1551133790-2315
I0225 22:29:53.549] Selector:     app=guestbook,tier=frontend
I0225 22:29:53.549] Labels:       app=guestbook
I0225 22:29:53.549]               tier=frontend
I0225 22:29:53.550] Annotations:  <none>
I0225 22:29:53.550] Replicas:     3 current / 3 desired
I0225 22:29:53.550] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0225 22:29:53.550] Pod Template:
I0225 22:29:53.550]   Labels:  app=guestbook
I0225 22:29:53.551]            tier=frontend
I0225 22:29:53.551]   Containers:
I0225 22:29:53.551]    php-redis:
I0225 22:29:53.551]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0225 22:29:53.678] Namespace:    namespace-1551133790-2315
I0225 22:29:53.679] Selector:     app=guestbook,tier=frontend
I0225 22:29:53.679] Labels:       app=guestbook
I0225 22:29:53.679]               tier=frontend
I0225 22:29:53.679] Annotations:  <none>
I0225 22:29:53.679] Replicas:     3 current / 3 desired
I0225 22:29:53.679] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0225 22:29:53.679] Pod Template:
I0225 22:29:53.679]   Labels:  app=guestbook
I0225 22:29:53.680]            tier=frontend
I0225 22:29:53.680]   Containers:
I0225 22:29:53.680]    php-redis:
I0225 22:29:53.680]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I0225 22:29:53.805] Namespace:    namespace-1551133790-2315
I0225 22:29:53.805] Selector:     app=guestbook,tier=frontend
I0225 22:29:53.805] Labels:       app=guestbook
I0225 22:29:53.805]               tier=frontend
I0225 22:29:53.805] Annotations:  <none>
I0225 22:29:53.805] Replicas:     3 current / 3 desired
I0225 22:29:53.806] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0225 22:29:53.806] Pod Template:
I0225 22:29:53.806]   Labels:  app=guestbook
I0225 22:29:53.806]            tier=frontend
I0225 22:29:53.806]   Containers:
I0225 22:29:53.806]    php-redis:
I0225 22:29:53.806]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 183 lines ...
I0225 22:30:00.190] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0225 22:30:00.313] apps.sh:643: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I0225 22:30:00.413] horizontalpodautoscaler.autoscaling "frontend" deleted
I0225 22:30:00.543] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0225 22:30:00.661] apps.sh:647: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0225 22:30:00.756] horizontalpodautoscaler.autoscaling "frontend" deleted
W0225 22:30:00.857] Error: required flag(s) "max" not set
W0225 22:30:00.857] 
W0225 22:30:00.857] 
W0225 22:30:00.858] Examples:
W0225 22:30:00.858]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0225 22:30:00.858]   kubectl autoscale deployment foo --min=2 --max=10
W0225 22:30:00.858]   
... skipping 88 lines ...
I0225 22:30:04.466] apps.sh:431: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0225 22:30:04.578] apps.sh:432: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0225 22:30:04.705] statefulset.apps/nginx rolled back
I0225 22:30:04.824] apps.sh:435: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0225 22:30:04.938] apps.sh:436: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0225 22:30:05.066] Successful
I0225 22:30:05.067] message:error: unable to find specified revision 1000000 in history
I0225 22:30:05.067] has:unable to find specified revision
I0225 22:30:05.180] apps.sh:440: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0225 22:30:05.293] apps.sh:441: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0225 22:30:05.423] statefulset.apps/nginx rolled back
I0225 22:30:05.543] apps.sh:444: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
I0225 22:30:05.655] apps.sh:445: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
... skipping 58 lines ...
I0225 22:30:07.878] Name:         mock
I0225 22:30:07.878] Namespace:    namespace-1551133806-3656
I0225 22:30:07.878] Selector:     app=mock
I0225 22:30:07.878] Labels:       app=mock
I0225 22:30:07.878] Annotations:  <none>
I0225 22:30:07.878] Replicas:     1 current / 1 desired
I0225 22:30:07.879] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0225 22:30:07.879] Pod Template:
I0225 22:30:07.879]   Labels:  app=mock
I0225 22:30:07.879]   Containers:
I0225 22:30:07.879]    mock-container:
I0225 22:30:07.879]     Image:        k8s.gcr.io/pause:2.0
I0225 22:30:07.879]     Port:         9949/TCP
... skipping 56 lines ...
I0225 22:30:10.496] Name:         mock
I0225 22:30:10.496] Namespace:    namespace-1551133806-3656
I0225 22:30:10.496] Selector:     app=mock
I0225 22:30:10.496] Labels:       app=mock
I0225 22:30:10.496] Annotations:  <none>
I0225 22:30:10.496] Replicas:     1 current / 1 desired
I0225 22:30:10.496] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0225 22:30:10.496] Pod Template:
I0225 22:30:10.496]   Labels:  app=mock
I0225 22:30:10.497]   Containers:
I0225 22:30:10.497]    mock-container:
I0225 22:30:10.497]     Image:        k8s.gcr.io/pause:2.0
I0225 22:30:10.497]     Port:         9949/TCP
... skipping 56 lines ...
I0225 22:30:13.115] Name:         mock
I0225 22:30:13.115] Namespace:    namespace-1551133806-3656
I0225 22:30:13.116] Selector:     app=mock
I0225 22:30:13.116] Labels:       app=mock
I0225 22:30:13.116] Annotations:  <none>
I0225 22:30:13.116] Replicas:     1 current / 1 desired
I0225 22:30:13.116] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0225 22:30:13.116] Pod Template:
I0225 22:30:13.116]   Labels:  app=mock
I0225 22:30:13.116]   Containers:
I0225 22:30:13.117]    mock-container:
I0225 22:30:13.117]     Image:        k8s.gcr.io/pause:2.0
I0225 22:30:13.117]     Port:         9949/TCP
... skipping 43 lines ...
I0225 22:30:15.654] Namespace:    namespace-1551133806-3656
I0225 22:30:15.654] Selector:     app=mock
I0225 22:30:15.654] Labels:       app=mock
I0225 22:30:15.654]               status=replaced
I0225 22:30:15.654] Annotations:  <none>
I0225 22:30:15.654] Replicas:     1 current / 1 desired
I0225 22:30:15.655] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0225 22:30:15.655] Pod Template:
I0225 22:30:15.655]   Labels:  app=mock
I0225 22:30:15.655]   Containers:
I0225 22:30:15.655]    mock-container:
I0225 22:30:15.655]     Image:        k8s.gcr.io/pause:2.0
I0225 22:30:15.655]     Port:         9949/TCP
... skipping 11 lines ...
I0225 22:30:15.668] Namespace:    namespace-1551133806-3656
I0225 22:30:15.668] Selector:     app=mock2
I0225 22:30:15.668] Labels:       app=mock2
I0225 22:30:15.668]               status=replaced
I0225 22:30:15.668] Annotations:  <none>
I0225 22:30:15.668] Replicas:     1 current / 1 desired
I0225 22:30:15.669] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0225 22:30:15.669] Pod Template:
I0225 22:30:15.669]   Labels:  app=mock2
I0225 22:30:15.669]   Containers:
I0225 22:30:15.669]    mock-container:
I0225 22:30:15.669]     Image:        k8s.gcr.io/pause:2.0
I0225 22:30:15.669]     Port:         9949/TCP
... skipping 104 lines ...
I0225 22:30:21.287] +++ [0225 22:30:21] Creating namespace namespace-1551133821-19275
I0225 22:30:21.372] namespace/namespace-1551133821-19275 created
I0225 22:30:21.458] Context "test" modified.
I0225 22:30:21.467] +++ [0225 22:30:21] Testing persistent volumes
I0225 22:30:21.576] storage.sh:30: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:30:21.773] persistentvolume/pv0001 created
W0225 22:30:21.874] E0225 22:30:21.782573   47432 pv_protection_controller.go:116] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
I0225 22:30:21.975] storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
I0225 22:30:21.984] persistentvolume "pv0001" deleted
I0225 22:30:22.177] persistentvolume/pv0002 created
I0225 22:30:22.300] storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
I0225 22:30:22.395] persistentvolume "pv0002" deleted
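
For reference, the persistent volumes in this block need nothing beyond capacity, an access mode, and a volume source; a minimal sketch of such a manifest (the hostPath and size are illustrative, not taken from this run):

  cat <<'EOF' | kubectl create -f -
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: pv0001
  spec:
    capacity:
      storage: 1Gi
    accessModes: ["ReadWriteOnce"]
    hostPath:
      path: /tmp/pv0001
  EOF
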
I0225 22:30:22.598] persistentvolume/pv0003 created
... skipping 490 lines ...
I0225 22:30:28.538] yes
I0225 22:30:28.538] has:the server doesn't have a resource type
I0225 22:30:28.626] Successful
I0225 22:30:28.627] message:yes
I0225 22:30:28.627] has:yes
I0225 22:30:28.713] Successful
I0225 22:30:28.713] message:error: --subresource can not be used with NonResourceURL
I0225 22:30:28.713] has:subresource can not be used with NonResourceURL
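
The check above guards mutually exclusive spellings of 'kubectl auth can-i': a request targets either a resource (optionally with --subresource) or a non-resource URL, never both. A sketch (values illustrative):

  kubectl auth can-i get pods --subresource=log    # resource + subresource: OK
  kubectl auth can-i get /logs                     # non-resource URL: OK
  kubectl auth can-i get /logs --subresource=log   # rejected, as above
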
I0225 22:30:28.807] Successful
I0225 22:30:28.904] Successful
I0225 22:30:28.905] message:yes
I0225 22:30:28.905] 0
I0225 22:30:28.905] has:0
... skipping 18 lines ...
I0225 22:30:29.249] role.rbac.authorization.k8s.io/testing-R reconciled
I0225 22:30:29.264] legacy-script.sh:763: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
I0225 22:30:29.369] legacy-script.sh:764: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
I0225 22:30:29.480] legacy-script.sh:765: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
I0225 22:30:29.593] legacy-script.sh:766: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
I0225 22:30:29.687] Successful
I0225 22:30:29.687] message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
I0225 22:30:29.687] has:only rbac.authorization.k8s.io/v1 is supported
I0225 22:30:29.792] rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
I0225 22:30:29.800] role.rbac.authorization.k8s.io "testing-R" deleted
I0225 22:30:29.814] clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
I0225 22:30:29.825] clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
I0225 22:30:29.840] Recording: run_retrieve_multiple_tests
... skipping 32 lines ...
I0225 22:30:31.236] +++ Running case: test-cmd.run_kubectl_explain_tests 
I0225 22:30:31.239] +++ working dir: /go/src/k8s.io/kubernetes
I0225 22:30:31.243] +++ command: run_kubectl_explain_tests
I0225 22:30:31.255] +++ [0225 22:30:31] Testing kubectl(v1:explain)
W0225 22:30:31.356] I0225 22:30:31.067854   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551133830-9166", Name:"cassandra", UID:"f53739a2-394c-11e9-bf9a-0242ac110002", APIVersion:"v1", ResourceVersion:"2656", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-vk67r
W0225 22:30:31.357] I0225 22:30:31.084882   47432 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551133830-9166", Name:"cassandra", UID:"f53739a2-394c-11e9-bf9a-0242ac110002", APIVersion:"v1", ResourceVersion:"2656", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-2v5kl
W0225 22:30:31.357] E0225 22:30:31.090971   47432 replica_set.go:450] Sync "namespace-1551133830-9166/cassandra" failed with replicationcontrollers "cassandra" not found
I0225 22:30:31.457] KIND:     Pod
I0225 22:30:31.458] VERSION:  v1
I0225 22:30:31.458] 
I0225 22:30:31.458] DESCRIPTION:
I0225 22:30:31.458]      Pod is a collection of containers that can run on a host. This resource is
I0225 22:30:31.458]      created by clients and scheduled onto hosts.
... skipping 1109 lines ...
I0225 22:31:00.759] message:node/127.0.0.1 already uncordoned (dry run)
I0225 22:31:00.759] has:already uncordoned
I0225 22:31:00.870] node-management.sh:119: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
I0225 22:31:00.965] node/127.0.0.1 labeled
I0225 22:31:01.080] node-management.sh:124: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
I0225 22:31:01.165] Successful
I0225 22:31:01.165] message:error: cannot specify both a node name and a --selector option
I0225 22:31:01.165] See 'kubectl drain -h' for help and examples
I0225 22:31:01.166] has:cannot specify both a node name
I0225 22:31:01.245] Successful
I0225 22:31:01.246] message:error: USAGE: cordon NODE [flags]
I0225 22:31:01.246] See 'kubectl cordon -h' for help and examples
I0225 22:31:01.246] has:error\: USAGE\: cordon NODE
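
The two usage errors above reflect how these commands take their target: cordon wants exactly one node name, and drain accepts either a node name or a --selector, never both. A sketch using the node from this run:

  kubectl cordon 127.0.0.1
  kubectl drain 127.0.0.1 --ignore-daemonsets
  kubectl uncordon 127.0.0.1
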
I0225 22:31:01.337] node/127.0.0.1 already uncordoned
I0225 22:31:01.428] Successful
I0225 22:31:01.429] message:error: You must provide one or more resources by argument or filename.
I0225 22:31:01.429] Example resource specifications include:
I0225 22:31:01.429]    '-f rsrc.yaml'
I0225 22:31:01.429]    '--filename=rsrc.json'
I0225 22:31:01.429]    '<resource> <name>'
I0225 22:31:01.429]    '<resource>'
I0225 22:31:01.429] has:must provide one or more resources
... skipping 15 lines ...
I0225 22:31:01.996] Successful
I0225 22:31:01.996] message:The following compatible plugins are available:
I0225 22:31:01.996] 
I0225 22:31:01.996] test/fixtures/pkg/kubectl/plugins/version/kubectl-version
I0225 22:31:01.997]   - warning: kubectl-version overwrites existing command: "kubectl version"
I0225 22:31:01.997] 
I0225 22:31:01.997] error: one plugin warning was found
I0225 22:31:01.997] has:kubectl-version overwrites existing command: "kubectl version"
I0225 22:31:02.081] Successful
I0225 22:31:02.081] message:The following compatible plugins are available:
I0225 22:31:02.081] 
I0225 22:31:02.082] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0225 22:31:02.082] test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
I0225 22:31:02.082]   - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0225 22:31:02.082] 
I0225 22:31:02.082] error: one plugin warning was found
I0225 22:31:02.082] has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
I0225 22:31:02.172] Successful
I0225 22:31:02.172] message:The following compatible plugins are available:
I0225 22:31:02.172] 
I0225 22:31:02.172] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0225 22:31:02.172] has:plugins are available
I0225 22:31:02.258] Successful
I0225 22:31:02.258] message:
I0225 22:31:02.259] error: unable to find any kubectl plugins in your PATH
I0225 22:31:02.259] has:unable to find any kubectl plugins in your PATH
I0225 22:31:02.339] Successful
I0225 22:31:02.340] message:I am plugin foo
I0225 22:31:02.340] has:plugin foo
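
The plugin tests above rely on kubectl's discovery rule: any executable named kubectl-<name> on PATH becomes the subcommand 'kubectl <name>', and collisions with builtins or with earlier PATH entries produce the warnings shown. A sketch of a fixture that would print "I am plugin foo" (install path illustrative):

  mkdir -p ~/bin && cat >~/bin/kubectl-foo <<'EOF'
  #!/bin/bash
  echo "I am plugin foo"
  EOF
  chmod +x ~/bin/kubectl-foo
  kubectl foo   # prints: I am plugin foo (assuming ~/bin is on PATH)
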
I0225 22:31:02.423] Successful
I0225 22:31:02.424] message:Client Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.0-alpha.0.358+54af3a65e2ba4f", GitCommit:"54af3a65e2ba4ff9272a52c1f5316a11945d81e5", GitTreeState:"clean", BuildDate:"2019-02-25T22:23:25Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
... skipping 9 lines ...
I0225 22:31:02.534] 
I0225 22:31:02.537] +++ Running case: test-cmd.run_impersonation_tests 
I0225 22:31:02.540] +++ working dir: /go/src/k8s.io/kubernetes
I0225 22:31:02.543] +++ command: run_impersonation_tests
I0225 22:31:02.556] +++ [0225 22:31:02] Testing impersonation
I0225 22:31:02.641] Successful
I0225 22:31:02.641] message:error: requesting groups or user-extra for  without impersonating a user
I0225 22:31:02.641] has:without impersonating a user
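
The error above fires when --as-group (or --as-user-extra) is supplied without --as to attach it to; the CSR checks that follow are consistent with a request impersonating user1. A sketch (the file name csr.yaml is hypothetical):

  kubectl get pods --as-group=system:masters   # rejected: group given without a user
  kubectl create -f csr.yaml --as=user1        # impersonated create; the CSR records user1
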
I0225 22:31:02.828] certificatesigningrequest.certificates.k8s.io/foo created
I0225 22:31:02.943] authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
I0225 22:31:03.051] authorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
I0225 22:31:03.147] certificatesigningrequest.certificates.k8s.io "foo" deleted
I0225 22:31:03.348] certificatesigningrequest.certificates.k8s.io/foo created
... skipping 75 lines ...
W0225 22:31:06.712] I0225 22:31:06.702819   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.712] I0225 22:31:06.706815   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.712] I0225 22:31:06.702925   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.712] I0225 22:31:06.706827   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.712] I0225 22:31:06.703129   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.713] I0225 22:31:06.706840   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.713] W0225 22:31:06.703194   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.713] W0225 22:31:06.703219   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.713] W0225 22:31:06.703364   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.714] W0225 22:31:06.703466   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.714] W0225 22:31:06.703537   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.714] W0225 22:31:06.704019   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.714] I0225 22:31:06.704024   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.714] I0225 22:31:06.706911   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.714] I0225 22:31:06.704057   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.715] I0225 22:31:06.704049   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.715] I0225 22:31:06.704082   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.715] I0225 22:31:06.704104   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 15 lines ...
W0225 22:31:06.717] I0225 22:31:06.704181   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.718] I0225 22:31:06.704202   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.718] I0225 22:31:06.704205   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.718] I0225 22:31:06.704207   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.718] I0225 22:31:06.704222   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.718] I0225 22:31:06.704233   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.718] W0225 22:31:06.704271   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.719] W0225 22:31:06.704278   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.719] I0225 22:31:06.704346   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.719] I0225 22:31:06.704441   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.719] W0225 22:31:06.704615   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.719] W0225 22:31:06.704633   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.720] W0225 22:31:06.704644   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.720] W0225 22:31:06.704943   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.720] I0225 22:31:06.705011   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.720] I0225 22:31:06.705035   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.720] I0225 22:31:06.705060   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.720] I0225 22:31:06.705070   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.721] I0225 22:31:06.705097   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.721] I0225 22:31:06.705122   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.721] W0225 22:31:06.705131   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.721] I0225 22:31:06.705145   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.721] W0225 22:31:06.705168   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.722] W0225 22:31:06.705191   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.722] I0225 22:31:06.705233   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.722] I0225 22:31:06.705266   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.722] I0225 22:31:06.705294   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.722] I0225 22:31:06.705340   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.722] I0225 22:31:06.705366   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.722] I0225 22:31:06.705403   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 12 lines ...
W0225 22:31:06.724] I0225 22:31:06.705861   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.724] I0225 22:31:06.705886   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.725] I0225 22:31:06.705889   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.725] I0225 22:31:06.705917   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.725] I0225 22:31:06.705931   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.725] I0225 22:31:06.705954   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.725] W0225 22:31:06.706002   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.726] W0225 22:31:06.706004   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.726] W0225 22:31:06.706012   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.726] W0225 22:31:06.705998   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.726] W0225 22:31:06.706028   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.726] W0225 22:31:06.706033   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.727] W0225 22:31:06.706047   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.727] W0225 22:31:06.706064   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.727] I0225 22:31:06.706066   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:31:06.727] W0225 22:31:06.706068   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.728] W0225 22:31:06.706079   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.728] W0225 22:31:06.706084   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.728] W0225 22:31:06.706084   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.728] W0225 22:31:06.706101   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.728] W0225 22:31:06.706108   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.729] W0225 22:31:06.706108   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.729] W0225 22:31:06.706120   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.729] W0225 22:31:06.706128   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.729] W0225 22:31:06.706138   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.730] W0225 22:31:06.706138   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.730] W0225 22:31:06.706149   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.730] W0225 22:31:06.706154   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.730] W0225 22:31:06.706162   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.731] W0225 22:31:06.706161   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.731] W0225 22:31:06.706163   44054 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0225 22:31:06.731] E0225 22:31:06.706172   44054 controller.go:172] Get https://127.0.0.1:6443/api/v1/namespaces/default/endpoints/kubernetes: dial tcp 127.0.0.1:6443: connect: connection refused
... skipping 33 lines ...
W0225 22:31:06.739] I0225 22:31:06.706579   44054 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 45 lines ...
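
[Editor's note] The warnings above are teardown noise rather than part of the test failure: once test-cmd shuts down the local etcd, the apiserver's gRPC client keeps redialing 127.0.0.1:2379 and logs one "Reconnecting..." line per failed attempt until the process exits. A minimal sketch of that probe-and-retry behavior, assuming grpcio is installed; wait_for_endpoint is an illustrative helper, not code from this build:

import time

import grpc


def wait_for_endpoint(target, attempts=5, base_delay=0.5):
    # One channel; channel_ready_future blocks until the transport connects,
    # which cannot happen while nothing is listening on the port.
    channel = grpc.insecure_channel(target)
    for attempt in range(1, attempts + 1):
        try:
            grpc.channel_ready_future(channel).result(timeout=base_delay)
            return True
        except grpc.FutureTimeoutError:
            # Mirrors one clientconn.go "Reconnecting..." warning above.
            print('dial %s refused (attempt %d). Reconnecting...' % (target, attempt))
            time.sleep(base_delay * attempt)  # simple backoff for the sketch
    return False


if __name__ == '__main__':
    # etcd is already gone at this point in the run, so every attempt fails.
    wait_for_endpoint('127.0.0.1:2379')
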
W0225 22:31:06.773] make: *** [test-cmd] Error 1
I0225 22:31:06.874] No resources found
I0225 22:31:06.874] No resources found
I0225 22:31:06.874] FAILED TESTS: run_kubectl_run_tests, 
I0225 22:31:06.875] junit report dir: /workspace/artifacts
I0225 22:31:06.875] +++ [0225 22:31:06] Clean up complete
I0225 22:31:06.875] Makefile:294: recipe for target 'test-cmd' failed
W0225 22:31:08.983] Traceback (most recent call last):
W0225 22:31:08.983]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 178, in <module>
W0225 22:31:08.983]     ARGS.exclude_typecheck, ARGS.exclude_godep)
W0225 22:31:08.983]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 140, in main
W0225 22:31:08.984]     check(*cmd)
W0225 22:31:08.984]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W0225 22:31:08.984]     subprocess.check_call(cmd)
W0225 22:31:08.984]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0225 22:31:09.005]     raise CalledProcessError(retcode, cmd)
W0225 22:31:09.006] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=n', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'EXCLUDE_TYPECHECK=n', '-e', 'EXCLUDE_GODEP=n', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.13-v20190125-cc5d6ecff3', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
E0225 22:31:09.012] Command failed
I0225 22:31:09.013] process 671 exited with code 1 after 12.5m
E0225 22:31:09.013] FAIL: pull-kubernetes-integration
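
[Editor's note] The traceback above comes from the Prow scenario wrapper, not from the tests themselves: kubernetes_verify.py launches the dockerized test run and lets any non-zero exit propagate. A minimal sketch of the check helper named at line 48 of the traceback; the real function may log differently, but the failure mode is the same: subprocess.check_call raises CalledProcessError when the docker run ... test-dockerized.sh container exits non-zero (status 2 in this run), and nothing catches it, so the job is marked FAIL.

import subprocess


def check(*cmd):
    # Log and run the command; check_call raises CalledProcessError on any
    # non-zero exit, which is what unwinds through main() in the traceback.
    print('Run: ' + ' '.join(cmd))
    subprocess.check_call(cmd)
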
I0225 22:31:09.013] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0225 22:31:09.544] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0225 22:31:09.602] process 85200 exited with code 0 after 0.0m
I0225 22:31:09.602] Call:  gcloud config get-value account
I0225 22:31:09.951] process 85212 exited with code 0 after 0.0m
I0225 22:31:09.952] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0225 22:31:09.952] Upload result and artifacts...
I0225 22:31:09.952] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/73650/pull-kubernetes-integration/46313
I0225 22:31:09.953] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/73650/pull-kubernetes-integration/46313/artifacts
W0225 22:31:11.216] CommandException: One or more URLs matched no objects.
E0225 22:31:11.384] Command failed
I0225 22:31:11.385] process 85224 exited with code 1 after 0.0m
W0225 22:31:11.385] Remote dir gs://kubernetes-jenkins/pr-logs/pull/73650/pull-kubernetes-integration/46313/artifacts not exist yet
I0225 22:31:11.385] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/73650/pull-kubernetes-integration/46313/artifacts
I0225 22:31:14.307] process 85366 exited with code 0 after 0.0m
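
[Editor's note] The upload step above is a probe-then-copy: gsutil ls exits non-zero (CommandException) when no objects match the artifacts URL, which the runner treats as "remote dir does not exist yet" before creating it with the cp. A sketch of that flow using the exact flags from the log, assuming gsutil is on PATH; upload_artifacts is an illustrative name, not the runner's actual function:

import subprocess


def upload_artifacts(local_dir, gcs_dir):
    # `gsutil ls` exits non-zero when no objects match the URL; that is the
    # CommandException logged above, not a fatal error for the upload.
    if subprocess.call(['gsutil', 'ls', gcs_dir]) != 0:
        print('Remote dir %s not exist yet' % gcs_dir)
    # Parallel (-m), quiet (-q) recursive copy, compressing log/txt/xml.
    subprocess.check_call([
        'gsutil', '-m', '-q', '-o', 'GSUtil:use_magicfile=True',
        'cp', '-r', '-c', '-z', 'log,txt,xml', local_dir, gcs_dir,
    ])


upload_artifacts('/workspace/_artifacts',
                 'gs://kubernetes-jenkins/pr-logs/pull/73650/pull-kubernetes-integration/46313/artifacts')
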
W0225 22:31:14.308] metadata path /workspace/_artifacts/metadata.json does not exist
W0225 22:31:14.308] metadata not found or invalid, init with empty metadata
... skipping 23 lines ...