PR: brianpursley: WIP:Attempt to fix pv cleanup flake
Result: FAILURE
Tests: 1 failed / 118 succeeded
Started: 2020-05-22 18:10
Elapsed: 19m23s
Revision: 2d0b108897f5d4a742d4efa5b8ba514b7afee644
Refs: 91169
resultstore: https://source.cloud.google.com/results/invocations/f527a823-8ebb-492d-8a92-beebae6bff68/targets/test

Test Failures


test-cmd run_crd_tests 1m13s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=test\-cmd\srun\_crd\_tests$'
/home/prow/go/src/k8s.io/kubernetes/hack/lib/test.sh: line 270: 74977 Killed                  while [ ${tries} -lt 10 ]; do
    tries=$((tries+1)); kubectl "${kube_flags[@]}" patch bars/test -p "{\"patched\":\"${tries}\"}" --type=merge; sleep 1;
done
/home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh: line 294: 74976 Killed                  kubectl "${kube_flags[@]}" get bars --request-timeout=1m --watch-only -o name
!!! [0522 18:25:54] Call tree:
!!! [0522 18:25:54]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh:458 kube::test::wait_object_assert(...)
!!! [0522 18:25:54]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/crd.sh:133 run_non_native_resource_tests(...)
!!! [0522 18:25:54]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 run_crd_tests(...)
!!! [0522 18:25:54]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0522 18:25:54]  5: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:130 juLog(...)
!!! [0522 18:25:54]  6: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:584 record_command(...)
!!! [0522 18:25:54]  7: hack/make-rules/test-cmd.sh:150 runTests(...)
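Note on the failure mode: kube::test::wait_object_assert polls its assertion until a retry budget expires, and the two "Killed" lines above appear to be the harness reaping its helper processes (the background merge-patch loop and the --watch-only reader) once the budget ran out with one stale bar still listed. A minimal sketch of that polling pattern, assuming an illustrative 30x1s budget and simplified arguments rather than the exact hack/lib/test.sh implementation:

# Sketch only: the function name mirrors the call tree above, but the retry
# budget, argument handling, and FAIL output format here are assumptions.
wait_object_assert() {
  local object=$1 request=$2 expected=$3
  local tries=0 res=""
  while [ "$tries" -lt 30 ]; do
    res=$(kubectl get "$object" -o go-template="$request")
    # Succeed as soon as the live object matches the expectation.
    [ "$res" = "$expected" ] && return 0
    tries=$((tries + 1))
    sleep 1
  done
  echo "FAIL! Get $object $request: expected $expected, got $res" >&2
  return 1
}

# The failing assertion at crd.sh:458 is equivalent to:
# wait_object_assert bars '{{len .items}}' 0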
				
Full stdout/stderr recorded in junit_test-cmd.xml




Error lines from build-log.txt

... skipping 85 lines ...
Recording: record_command_canary
Running command: record_command_canary

+++ Running case: test-cmd.record_command_canary 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: record_command_canary
/home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 154: bogus-expected-to-fail: command not found
!!! [0522 18:15:31] Call tree:
!!! [0522 18:15:31]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 record_command_canary(...)
!!! [0522 18:15:31]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0522 18:15:31]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:130 juLog(...)
!!! [0522 18:15:31]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:158 record_command(...)
!!! [0522 18:15:31]  5: hack/make-rules/test-cmd.sh:35 source(...)
+++ exit code: 1
+++ error: 1
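The stanza above is the harness self-test, not a real failure: record_command_canary deliberately invokes the nonexistent bogus-expected-to-fail so the run aborts early if failure recording is broken, which is why exit code 1 is the expected outcome here. A minimal sketch of the idea, with a hypothetical record_command wrapper standing in for the juLog/sh2ju machinery in legacy-script.sh:

# Hypothetical wrapper; only the canary idea is taken from the log above.
record_command() {
  local name=$1
  if "$name"; then
    echo "+++ exit code: 0"
  else
    echo "+++ exit code: 1"   # the canary must land here, proving failures are recorded
  fi
}

record_command_canary() {
  bogus-expected-to-fail   # intentionally nonexistent command, as in the log
}

record_command record_command_canary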
+++ [0522 18:15:32] Running kubeadm tests
+++ [0522 18:15:39] Building go targets for linux/amd64:
    cmd/kubeadm
+++ [0522 18:16:36] Running tests without code coverage
{"Time":"2020-05-22T18:18:17.313358808Z","Action":"output","Package":"k8s.io/kubernetes/cmd/kubeadm/test/cmd","Output":"ok  \tk8s.io/kubernetes/cmd/kubeadm/test/cmd\t57.261s\n"}
✓  cmd/kubeadm/test/cmd (57.264s)
... skipping 314 lines ...
I0522 18:20:57.977344   53868 client.go:360] parsed scheme: "passthrough"
I0522 18:20:57.977400   53868 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{http://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0522 18:20:57.977410   53868 clientconn.go:933] ClientConn switching balancer to "pick_first"
+++ [0522 18:21:07] Starting controller-manager
Flag --port has been deprecated, see --secure-port instead.
I0522 18:21:07.842327   57453 serving.go:331] Generated self-signed cert in-memory
W0522 18:21:08.833534   57453 authentication.go:368] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0522 18:21:08.833873   57453 authentication.go:265] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0522 18:21:08.833888   57453 authentication.go:289] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0522 18:21:08.833937   57453 authorization.go:177] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0522 18:21:08.833968   57453 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0522 18:21:08.833994   57453 controllermanager.go:160] Version: v1.19.0-beta.0.135+d09b14154d76ed
I0522 18:21:08.836218   57453 secure_serving.go:187] Serving securely on [::]:10257
I0522 18:21:08.836313   57453 tlsconfig.go:240] Starting DynamicServingCertificateController
I0522 18:21:08.836994   57453 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
I0522 18:21:08.837053   57453 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...
... skipping 95 lines ...
I0522 18:21:09.645653   57453 graph_builder.go:282] GraphBuilder running
I0522 18:21:09.645654   57453 controllermanager.go:532] Started "garbagecollector"
I0522 18:21:09.645987   57453 controllermanager.go:532] Started "ttl"
W0522 18:21:09.645997   57453 controllermanager.go:511] "tokencleaner" is disabled
I0522 18:21:09.646211   57453 ttl_controller.go:118] Starting TTL controller
I0522 18:21:09.646232   57453 shared_informer.go:240] Waiting for caches to sync for TTL
E0522 18:21:09.646343   57453 core.go:90] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0522 18:21:09.646367   57453 controllermanager.go:524] Skipping "service"
I0522 18:21:09.646814   57453 controllermanager.go:532] Started "endpoint"
I0522 18:21:09.647338   57453 endpoints_controller.go:182] Starting endpoint controller
I0522 18:21:09.647367   57453 shared_informer.go:240] Waiting for caches to sync for endpoint
I0522 18:21:09.647632   57453 controllermanager.go:532] Started "deployment"
W0522 18:21:09.647652   57453 controllermanager.go:511] "bootstrapsigner" is disabled
... skipping 40 lines ...
I0522 18:21:09.907919   57453 controllermanager.go:532] Started "csrapproving"
I0522 18:21:09.907959   57453 certificate_controller.go:119] Starting certificate controller "csrapproving"
I0522 18:21:09.907935   57453 disruption.go:331] Starting disruption controller
I0522 18:21:09.907989   57453 shared_informer.go:240] Waiting for caches to sync for certificate-csrapproving
I0522 18:21:09.907999   57453 shared_informer.go:240] Waiting for caches to sync for disruption
I0522 18:21:09.910641   57453 node_lifecycle_controller.go:77] Sending events to api server
E0522 18:21:09.910684   57453 core.go:230] failed to start cloud node lifecycle controller: no cloud provider provided
W0522 18:21:09.910695   57453 controllermanager.go:524] Skipping "cloud-node-lifecycle"
W0522 18:21:09.911246   57453 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
I0522 18:21:09.911947   57453 controllermanager.go:532] Started "attachdetach"
I0522 18:21:09.912031   57453 attach_detach_controller.go:338] Starting attach detach controller
I0522 18:21:09.912053   57453 shared_informer.go:240] Waiting for caches to sync for attach detach
I0522 18:21:09.912400   57453 controllermanager.go:532] Started "persistentvolume-expander"
... skipping 6 lines ...
I0522 18:21:09.914127   57453 controllermanager.go:532] Started "serviceaccount"
I0522 18:21:09.914727   57453 controllermanager.go:532] Started "cronjob"
W0522 18:21:09.914763   57453 controllermanager.go:524] Skipping "csrsigning"
I0522 18:21:09.918522   57453 serviceaccounts_controller.go:117] Starting service account controller
I0522 18:21:09.918546   57453 shared_informer.go:240] Waiting for caches to sync for service account
I0522 18:21:09.918594   57453 cronjob_controller.go:96] Starting CronJob Manager
W0522 18:21:09.979883   57453 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
I0522 18:21:10.018711   57453 shared_informer.go:247] Caches are synced for service account 
I0522 18:21:10.022716   53868 controller.go:606] quota admission added evaluator for: serviceaccounts
I0522 18:21:10.039513   57453 shared_informer.go:247] Caches are synced for namespace 
I0522 18:21:10.046416   57453 shared_informer.go:247] Caches are synced for TTL 
I0522 18:21:10.053677   57453 shared_informer.go:247] Caches are synced for PV protection 
I0522 18:21:10.118488   57453 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
E0522 18:21:10.130105   57453 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I0522 18:21:10.208551   57453 shared_informer.go:247] Caches are synced for disruption 
I0522 18:21:10.208592   57453 disruption.go:339] Sending events to api server.
I0522 18:21:10.212313   57453 shared_informer.go:247] Caches are synced for attach detach 
I0522 18:21:10.212757   57453 shared_informer.go:247] Caches are synced for expand 
I0522 18:21:10.213467   57453 shared_informer.go:247] Caches are synced for job 
I0522 18:21:10.213664   57453 shared_informer.go:247] Caches are synced for ReplicationController 
... skipping 128 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [0522 18:21:17] Creating namespace namespace-1590171677-2126
namespace/namespace-1590171677-2126 created
Context "test" modified.
+++ [0522 18:21:17] Testing RESTMapper
+++ [0522 18:21:18] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
bindings                                                                      true         Binding
componentstatuses                 cs                                          false        ComponentStatus
configmaps                        cm                                          true         ConfigMap
endpoints                         ep                                          true         Endpoints
... skipping 58 lines ...
namespace/namespace-1590171684-15201 created
Context "test" modified.
+++ [0522 18:21:24] Testing clusterroles
rbac.sh:29: Successful get clusterroles/cluster-admin {{.metadata.name}}: cluster-admin
rbac.sh:30: Successful get clusterrolebindings/cluster-admin {{.metadata.name}}: cluster-admin
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created (dry run)
clusterrole.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created
rbac.sh:42: Successful get clusterrole/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "pod-admin" deleted
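This rbac block shows the create pattern the whole suite reuses: probe for NotFound, exercise the create as a client dry run and a server dry run, create for real, then read the object back with a go-template. A hedged replay of the pod-admin sequence (the name and the '*' verb come from the log; the flags are standard kubectl):

kubectl get clusterroles/pod-admin   # expected while absent: Error from server (NotFound)
kubectl create clusterrole pod-admin --verb='*' --resource=pods --dry-run=client
kubectl create clusterrole pod-admin --verb='*' --resource=pods --dry-run=server
kubectl create clusterrole pod-admin --verb='*' --resource=pods
# Read back the rule verbs the same way rbac.sh:42 does:
kubectl get clusterrole/pod-admin -o go-template='{{range .rules}}{{range .verbs}}{{.}}:{{end}}{{end}}'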
... skipping 18 lines ...
clusterrole.rbac.authorization.k8s.io/url-reader created
rbac.sh:61: Successful get clusterrole/url-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:
rbac.sh:62: Successful get clusterrole/url-reader {{range.rules}}{{range.nonResourceURLs}}{{.}}:{{end}}{{end}}: /logs/*:/healthz/*:
clusterrole.rbac.authorization.k8s.io/aggregation-reader created
rbac.sh:64: Successful get clusterrole/aggregation-reader {{.metadata.name}}: aggregation-reader
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created
rbac.sh:77: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (server dry run)
rbac.sh:80: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
... skipping 61 lines ...
rbac.sh:102: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:foo:test-all-user:
rbac.sh:103: Successful get clusterrolebinding/super-group {{range.subjects}}{{.name}}:{{end}}: the-group:foo:test-all-user:
rbac.sh:104: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.name}}:{{end}}: sa-name:foo:test-all-user:
rolebinding.rbac.authorization.k8s.io/admin created (dry run)
rolebinding.rbac.authorization.k8s.io/admin created (server dry run)
Successful
message:Error from server (NotFound): rolebindings.rbac.authorization.k8s.io "admin" not found
has: not found
rolebinding.rbac.authorization.k8s.io/admin created
rbac.sh:113: Successful get rolebinding/admin {{.roleRef.kind}}: ClusterRole
rbac.sh:114: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:
rolebinding.rbac.authorization.k8s.io/admin subjects updated
rbac.sh:116: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:foo:
... skipping 25 lines ...
namespace/namespace-1590171697-6651 created
Context "test" modified.
+++ [0522 18:21:37] Testing role
role.rbac.authorization.k8s.io/pod-admin created (dry run)
role.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): roles.rbac.authorization.k8s.io "pod-admin" not found
has: not found
role.rbac.authorization.k8s.io/pod-admin created
rbac.sh:155: Successful get role/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
rbac.sh:156: Successful get role/pod-admin {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
rbac.sh:157: Successful get role/pod-admin {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
Successful
... skipping 462 lines ...
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
core.sh:192: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: resource(s) were provided, but no name, label selector, or --all flag specified
core.sh:196: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: setting 'all' parameter but found a non empty selector. 
core.sh:204: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:208: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:212: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:217: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
... skipping 19 lines ...
poddisruptionbudget.policy/test-pdb-2 created
core.sh:261: Successful get pdb/test-pdb-2 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 50%
poddisruptionbudget.policy/test-pdb-3 created
core.sh:267: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
poddisruptionbudget.policy/test-pdb-4 created
core.sh:271: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
error: min-available and max-unavailable cannot be both specified
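The four test-pdb objects plus the error above pin down the kubectl create poddisruptionbudget contract: --min-available and --max-unavailable each accept an absolute count or a percentage, but are mutually exclusive. A hedged illustration (budget names and selector are made up):

kubectl create poddisruptionbudget pdb-a --selector=app=web --min-available=50%
kubectl create poddisruptionbudget pdb-b --selector=app=web --max-unavailable=2
# Rejected, matching the error above:
kubectl create poddisruptionbudget pdb-c --selector=app=web --min-available=1 --max-unavailable=1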
core.sh:277: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
pod/env-test-pod created
matched TEST_CMD_1
matched <set to the key 'key-1' in secret 'test-secret'>
matched TEST_CMD_2
matched <set to the key 'key-2' of config map 'test-configmap'>
... skipping 224 lines ...
core.sh:536: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.2:
Successful
message:kubectl-create kubectl-patch
has:kubectl-patch
pod/valid-pod patched
core.sh:556: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
+++ [0522 18:22:20] "kubectl patch with resourceVersion 572" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
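The Conflict above is produced by pinning a stale resourceVersion inside the merge-patch body, which turns the patch into a failed optimistic-concurrency precondition. A hedged repro (572 is the stale version from the log; the container name and image are assumptions drawn from the surrounding tests):

kubectl patch pod valid-pod --type=merge \
  -p '{"metadata":{"resourceVersion":"572"},"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"nginx"}]}}'
# Expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod" ...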
pod "valid-pod" deleted
pod/valid-pod replaced
core.sh:580: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
Successful
message:kubectl-create kubectl-patch kubectl-replace
has:kubectl-replace
Successful
message:error: --grace-period must have --force specified
has:\-\-grace-period must have \-\-force specified
Successful
message:error: --timeout must have --force specified
has:\-\-timeout must have \-\-force specified
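Per the two assertions above, this kubectl build rejects --grace-period and --timeout on delete unless --force accompanies them. A hedged illustration mirroring those assertions (pod name from the log; the timeout value is made up):

kubectl delete pod valid-pod --grace-period=0          # rejected here: --grace-period must have --force
kubectl delete pod valid-pod --timeout=1m              # rejected here: --timeout must have --force
kubectl delete pod valid-pod --force --grace-period=0  # accepted: immediate, unconfirmed deletion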
node/node-v1-test created
W0522 18:22:21.929366   57453 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
core.sh:608: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
node/node-v1-test replaced (server dry run)
node/node-v1-test replaced (dry run)
core.sh:633: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
node/node-v1-test replaced
core.sh:649: Successful get node node-v1-test {{.metadata.annotations.a}}: b
... skipping 29 lines ...
spec:
  containers:
  - image: k8s.gcr.io/pause:2.0
    name: kubernetes-pause
has:localonlyvalue
core.sh:685: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
error: 'name' already has a value (valid-pod), and --overwrite is false
core.sh:689: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
core.sh:693: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
pod/valid-pod labeled
core.sh:697: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
core.sh:701: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 83 lines ...
+++ Running case: test-cmd.run_kubectl_create_error_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_error_tests
+++ [0522 18:22:38] Creating namespace namespace-1590171758-29639
namespace/namespace-1590171758-29639 created
Context "test" modified.
+++ [0522 18:22:38] Testing kubectl create with error
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
... skipping 42 lines ...

Usage:
  kubectl create -f FILENAME [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
+++ [0522 18:22:39] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
+++ exit code: 0
Recording: run_kubectl_apply_tests
Running command: run_kubectl_apply_tests

... skipping 36 lines ...
I0522 18:22:45.720887   53868 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{http://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0522 18:22:45.720897   53868 clientconn.go:933] ClientConn switching balancer to "pick_first"
apply.sh:88: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/selector-test-pod created
apply.sh:92: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
apply.sh:101: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Flag --server-dry-run has been deprecated, --server-dry-run is deprecated and can be replaced with --dry-run=server.
pod/test-pod created (server dry run)
W0522 18:22:47.123431   67743 helpers.go:552] --dry-run=true is deprecated (boolean value) and can be replaced with --dry-run=client.
... skipping 7 lines ...
(Bpod "test-pod" deleted
customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
I0522 18:22:50.116942   53868 client.go:360] parsed scheme: "endpoint"
I0522 18:22:50.116979   53868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0522 18:22:50.301490   53868 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
kind.mygroup.example.com/myobj created (server dry run)
Error from server (NotFound): resources.mygroup.example.com "myobj" not found
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
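The myobj exchange above is the server-side dry-run contract in miniature: the apiserver admits and validates the create (note the quota evaluator registration) but persists nothing, so the follow-up get reports NotFound. A hedged replay, assuming a manifest file for the custom object:

kubectl create -f myobj.yaml --dry-run=server    # myobj.yaml is assumed; validated server-side, not stored
kubectl get resources.mygroup.example.com myobj  # expected: Error from server (NotFound)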
namespace/nsb created
apply.sh:154: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/a created
apply.sh:157: Successful get pods a -n nsb {{.metadata.name}}: a
pod/b created
pod/a pruned
I0522 18:22:52.673738   57453 horizontal.go:354] Horizontal Pod Autoscaler frontend has been deleted in namespace-1590171754-17060
apply.sh:161: Successful get pods b -n nsb {{.metadata.name}}: b
Successful
message:Error from server (NotFound): pods "a" not found
has:pods "a" not found
pod "b" deleted
apply.sh:171: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/a created
apply.sh:176: Successful get pods a {{.metadata.name}}: a
Successful
message:Error from server (NotFound): pods "b" not found
has:pods "b" not found
pod/b created
apply.sh:184: Successful get pods a {{.metadata.name}}: a
apply.sh:185: Successful get pods b -n nsb {{.metadata.name}}: b
(Bpod "a" deleted
pod "b" deleted
Successful
message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
has:all resources selected for prune without explicitly passing --all
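That error is apply's guard rail for pruning: --prune deletes previously-applied objects that are missing from the current -f input, so it demands an explicit scope. A hedged illustration (the directory and label are made up):

kubectl apply --prune -f manifests/               # rejected: no selector and no --all
kubectl apply --prune -f manifests/ -l app=myapp  # accepted: prune scoped by label selector
kubectl apply --prune -f manifests/ --all         # accepted: explicit opt-in to prune everything applied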
pod/a created
pod/b created
service/prune-svc created
apply.sh:197: Successful get pods a {{.metadata.name}}: a
apply.sh:198: Successful get pods b -n nsb {{.metadata.name}}: b
... skipping 33 lines ...
apply.sh:235: Successful get pods a -n nsb {{.metadata.name}}: a
pod/b created
apply.sh:238: Successful get pods b -n nsb {{.metadata.name}}: b
pod/b unchanged
pod/a pruned
Successful
message:Error from server (NotFound): pods "a" not found
has:pods "a" not found
apply.sh:245: Successful get pods b -n nsb {{.metadata.name}}: b
(Bnamespace "nsb" deleted
Successful
message:error: the namespace from the provided object "nsb" does not match the namespace "foo". You must pass '--namespace=nsb' to perform this operation.
has:the namespace from the provided object "nsb" does not match the namespace "foo".
apply.sh:256: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: 
service/a created
apply.sh:260: Successful get services a {{.metadata.name}}: a
Successful
message:The Service "a" is invalid: spec.clusterIP: Invalid value: "10.0.0.12": field is immutable
has:field is immutable
I0522 18:23:23.664137   57453 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"namespace-1590171759-18461", Name:"a", UID:"8b2f6e55-e82a-4023-b8f0-c84c0f19d98a", APIVersion:"v1", ResourceVersion:"745", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpoint' Failed to update endpoint namespace-1590171759-18461/a: Operation cannot be fulfilled on endpoints "a": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/namespace-1590171759-18461/a, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 8b2f6e55-e82a-4023-b8f0-c84c0f19d98a, UID in object meta: 
service/a configured
apply.sh:267: Successful get services a {{.spec.clusterIP}}: 10.0.0.12
(Bservice "a" deleted
configmap/test-the-map created
service/test-the-service created
deployment.apps/test-the-deployment created
... skipping 18 lines ...
apply.sh:282: Successful get deployment test-the-deployment {{.metadata.name}}: test-the-deployment
apply.sh:283: Successful get service test-the-service {{.metadata.name}}: test-the-service
configmap "test-the-map" deleted
service "test-the-service" deleted
deployment.apps "test-the-deployment" deleted
Successful
message:Error from server (NotFound): namespaces "multi-resource-ns" not found
has:namespaces "multi-resource-ns" not found
apply.sh:291: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0522 18:23:27.157408   57453 namespace_controller.go:196] Namespace has been deleted nsb
I0522 18:23:27.181468   53868 client.go:360] parsed scheme: "passthrough"
I0522 18:23:27.181521   53868 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{http://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0522 18:23:27.181531   53868 clientconn.go:933] ClientConn switching balancer to "pick_first"
Successful
message:namespace/multi-resource-ns created
Error from server (NotFound): error when creating "hack/testdata/multi-resource-1.yaml": namespaces "multi-resource-ns" not found
has:namespaces "multi-resource-ns" not found
Successful
message:Error from server (NotFound): pods "test-pod" not found
has:pods "test-pod" not found
pod/test-pod created
namespace/multi-resource-ns unchanged
apply.sh:299: Successful get pods test-pod -n multi-resource-ns {{.metadata.name}}: test-pod
(Bpod "test-pod" deleted
namespace "multi-resource-ns" deleted
apply.sh:305: Successful get configmaps {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:configmap/foo created
error: unable to recognize "hack/testdata/multi-resource-2.yaml": no matches for kind "Bogus" in version "example.com/v1"
has:no matches for kind "Bogus" in version "example.com/v1"
apply.sh:311: Successful get configmaps foo {{.metadata.name}}: foo
(Bconfigmap "foo" deleted
apply.sh:317: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
message:pod/pod-a created
... skipping 5 lines ...
(Bpod "pod-a" deleted
pod "pod-c" deleted
apply.sh:325: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(Bapply.sh:329: Successful get crds {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
message:customresourcedefinition.apiextensions.k8s.io/widgets.example.com created
error: unable to recognize "hack/testdata/multi-resource-4.yaml": no matches for kind "Widget" in version "example.com/v1"
has:no matches for kind "Widget" in version "example.com/v1"
I0522 18:23:36.031772   53868 client.go:360] parsed scheme: "endpoint"
I0522 18:23:36.031815   53868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
Successful
message:Error from server (NotFound): widgets.example.com "foo" not found
has:widgets.example.com "foo" not found
apply.sh:335: Successful get crds widgets.example.com {{.metadata.name}}: widgets.example.com
I0522 18:23:36.836006   53868 controller.go:606] quota admission added evaluator for: widgets.example.com
widget.example.com/foo created
customresourcedefinition.apiextensions.k8s.io/widgets.example.com unchanged
apply.sh:338: Successful get widget foo {{.metadata.name}}: foo
... skipping 34 lines ...
I0522 18:23:42.906329   53868 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
kind.mygroup.example.com/myobj serverside-applied (server dry run)
W0522 18:23:42.962810   57453 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0522 18:23:42.962879   57453 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for resources.mygroup.example.com
I0522 18:23:42.962942   57453 shared_informer.go:240] Waiting for caches to sync for resource quota
I0522 18:23:43.063538   57453 shared_informer.go:247] Caches are synced for resource quota 
Error from server (NotFound): resources.mygroup.example.com "myobj" not found
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
+++ exit code: 0
Recording: run_kubectl_run_tests
Running command: run_kubectl_run_tests

+++ Running case: test-cmd.run_kubectl_run_tests 
... skipping 3 lines ...
namespace/namespace-1590171823-5832 created
Context "test" modified.
+++ [0522 18:23:43] Testing kubectl run
pod/nginx-extensions created (dry run)
pod/nginx-extensions created (server dry run)
W0522 18:23:44.205914   53868 cacher.go:151] Terminating all watchers from cacher *unstructured.Unstructured
E0522 18:23:44.207979   57453 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
run.sh:32: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
run.sh:35: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/nginx-extensions created
run.sh:39: Successful get pod {{range.items}}{{.metadata.name}}:{{end}}: nginx-extensions:
pod "nginx-extensions" deleted
Successful
message:pod/test1 created
has:pod/test1 created
pod "test1" deleted
Successful
message:error: Invalid image name "InvalidImageName": invalid reference format
has:error: Invalid image name "InvalidImageName": invalid reference format
+++ exit code: 0
Recording: run_kubectl_create_filter_tests
Running command: run_kubectl_create_filter_tests

+++ Running case: test-cmd.run_kubectl_create_filter_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_filter_tests
+++ [0522 18:23:45] Creating namespace namespace-1590171825-16041
namespace/namespace-1590171825-16041 created
Context "test" modified.
+++ [0522 18:23:45] Testing kubectl create filter
E0522 18:23:45.622724   57453 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
create.sh:50: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/selector-test-pod created
create.sh:54: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_apply_deployments_tests
Running command: run_kubectl_apply_deployments_tests

... skipping 7 lines ...
apps.sh:119: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:120: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:121: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/my-depl created
I0522 18:23:47.369721   57453 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1590171826-17385", Name:"my-depl", UID:"6aa2f6a5-bad8-4daa-a1a6-a762ae7e7f51", APIVersion:"apps/v1", ResourceVersion:"918", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set my-depl-76fb9d7d7d to 1
I0522 18:23:47.376145   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590171826-17385", Name:"my-depl-76fb9d7d7d", UID:"652ded53-65fe-4f17-814b-e3eb44a276d6", APIVersion:"apps/v1", ResourceVersion:"919", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: my-depl-76fb9d7d7d-znz79
E0522 18:23:47.506715   57453 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:125: Successful get deployments my-depl {{.metadata.name}}: my-depl
apps.sh:127: Successful get deployments my-depl {{.spec.template.metadata.labels.l1}}: l1
apps.sh:128: Successful get deployments my-depl {{.spec.selector.matchLabels.l1}}: l1
apps.sh:129: Successful get deployments my-depl {{.metadata.labels.l1}}: l1
deployment.apps/my-depl configured
apps.sh:134: Successful get deployments my-depl {{.spec.template.metadata.labels.l1}}: l1
... skipping 9 lines ...
deployment.apps/nginx created
I0522 18:23:49.881269   57453 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1590171826-17385", Name:"nginx", UID:"5d93956f-e89d-4f39-b73a-2fa5da20aae3", APIVersion:"apps/v1", ResourceVersion:"945", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-9587c59df to 3
I0522 18:23:49.888287   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590171826-17385", Name:"nginx-9587c59df", UID:"ba24974c-925d-4bbc-a3be-5c899070bf99", APIVersion:"apps/v1", ResourceVersion:"946", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9587c59df-cw46v
I0522 18:23:49.892161   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590171826-17385", Name:"nginx-9587c59df", UID:"ba24974c-925d-4bbc-a3be-5c899070bf99", APIVersion:"apps/v1", ResourceVersion:"946", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9587c59df-zmqqd
I0522 18:23:49.892475   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590171826-17385", Name:"nginx-9587c59df", UID:"ba24974c-925d-4bbc-a3be-5c899070bf99", APIVersion:"apps/v1", ResourceVersion:"946", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9587c59df-8s4cl
apps.sh:152: Successful get deployment nginx {{.metadata.name}}: nginx
E0522 18:23:52.945271   57453 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1590171826-17385\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx", Namespace: "namespace-1590171826-17385"
for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
has:Error from server (Conflict)
E0522 18:23:58.760186   57453 replica_set.go:535] sync "namespace-1590171826-17385/nginx-9587c59df" failed with Operation cannot be fulfilled on replicasets.apps "nginx-9587c59df": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1590171826-17385/nginx-9587c59df, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: ba24974c-925d-4bbc-a3be-5c899070bf99, UID in object meta: 
deployment.apps/nginx configured
I0522 18:23:59.733154   57453 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1590171826-17385", Name:"nginx", UID:"f9773b96-187a-4b20-8cbb-45fb51a7940f", APIVersion:"apps/v1", ResourceVersion:"988", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-6c499547c4 to 3
I0522 18:23:59.737595   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590171826-17385", Name:"nginx-6c499547c4", UID:"53c024b3-12b9-4886-8f43-42ce43ae9684", APIVersion:"apps/v1", ResourceVersion:"989", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6c499547c4-45sj5
I0522 18:23:59.767797   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590171826-17385", Name:"nginx-6c499547c4", UID:"53c024b3-12b9-4886-8f43-42ce43ae9684", APIVersion:"apps/v1", ResourceVersion:"989", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6c499547c4-vb2z8
I0522 18:23:59.768403   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590171826-17385", Name:"nginx-6c499547c4", UID:"53c024b3-12b9-4886-8f43-42ce43ae9684", APIVersion:"apps/v1", ResourceVersion:"989", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6c499547c4-xwf84
Successful
message:        "name": "nginx2"
          "name": "nginx2"
has:"name": "nginx2"
E0522 18:24:02.633830   57453 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0522 18:24:05.298837   57453 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1590171826-17385", Name:"nginx", UID:"acf6fc1c-60d8-409c-9104-473f9611d0ae", APIVersion:"apps/v1", ResourceVersion:"1026", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-6c499547c4 to 3
I0522 18:24:05.304543   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590171826-17385", Name:"nginx-6c499547c4", UID:"347011b0-550c-45f5-93ec-4568ec88b96c", APIVersion:"apps/v1", ResourceVersion:"1027", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6c499547c4-6d924
I0522 18:24:05.308527   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590171826-17385", Name:"nginx-6c499547c4", UID:"347011b0-550c-45f5-93ec-4568ec88b96c", APIVersion:"apps/v1", ResourceVersion:"1027", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6c499547c4-smrd6
I0522 18:24:05.308678   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590171826-17385", Name:"nginx-6c499547c4", UID:"347011b0-550c-45f5-93ec-4568ec88b96c", APIVersion:"apps/v1", ResourceVersion:"1027", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6c499547c4-p8ppz
Successful
message:The Deployment "nginx" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"name":"nginx3"}: `selector` does not match template `labels`
... skipping 294 lines ...
+++ [0522 18:24:12] Creating namespace namespace-1590171852-20229
namespace/namespace-1590171852-20229 created
Context "test" modified.
+++ [0522 18:24:12] Testing kubectl get
get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0522 18:24:13.316012   57453 shared_informer.go:240] Waiting for caches to sync for resource quota
I0522 18:24:13.316063   57453 shared_informer.go:247] Caches are synced for resource quota 
Successful
message:{
... skipping 25 lines ...
has not:No resources found
Successful
message:NAME
has not:No resources found
get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:error: the server doesn't have a resource type "foobar"
has not:No resources found
Successful
message:No resources found in namespace-1590171852-20229 namespace.
has:No resources found
Successful
message:
has not:No resources found
Successful
message:No resources found in namespace-1590171852-20229 namespace.
has:No resources found
get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
Successful
message:Error from server (NotFound): pods "abc" not found
has not:List
Successful
message:I0522 18:24:15.038792   71795 loader.go:375] Config loaded from file:  /tmp/tmp.Yy6GTdVAER/.kube/config
I0522 18:24:15.040427   71795 round_trippers.go:443] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0522 18:24:15.077640   71795 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
I0522 18:24:15.079484   71795 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds
... skipping 623 lines ...
}
get.sh:158: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(B<no value>Successful
message:valid-pod:
has:valid-pod:
Successful
message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2020-05-22T18:24:23Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fieldsType":"FieldsV1", "fieldsV1":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl-create", "operation":"Update", "time":"2020-05-22T18:24:23Z"}}, "name":"valid-pod", "namespace":"namespace-1590171862-1233", "resourceVersion":"1085", "selfLink":"/api/v1/namespaces/namespace-1590171862-1233/pods/valid-pod", "uid":"993e5314-d530-4506-ae52-cf48d0be1d8f"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
has:missing is not found
error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
Successful
message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2020-05-22T18:24:23Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl-create","operation":"Update","time":"2020-05-22T18:24:23Z"}],"name":"valid-pod","namespace":"namespace-1590171862-1233","resourceVersion":"1085","selfLink":"/api/v1/namespaces/namespace-1590171862-1233/pods/valid-pod","uid":"993e5314-d530-4506-ae52-cf48d0be1d8f"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2020-05-22T18:24:23Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fieldsType:FieldsV1 fieldsV1:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl-create operation:Update time:2020-05-22T18:24:23Z]] name:valid-pod namespace:namespace-1590171862-1233 resourceVersion:1085 selfLink:/api/v1/namespaces/namespace-1590171862-1233/pods/valid-pod uid:993e5314-d530-4506-ae52-cf48d0be1d8f] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
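The two stanzas above probe the same missing key through both output engines, whose error shapes differ: jsonpath reports "missing is not found" while go-template reports "map has no entry for key". A hedged repro (pod name from the log):

kubectl get pod valid-pod -o jsonpath='{.missing}'        # jsonpath: "missing is not found"
kubectl get pod valid-pod -o go-template='{{.missing}}'   # go-template: map has no entry for key "missing"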
... skipping 3 lines ...
valid-pod   0/1     Pending   0          1s
has:STATUS
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
E0522 18:24:25.854919   57453 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:pod/valid-pod
has not:STATUS
Successful
message:pod/valid-pod
has:pod/valid-pod
... skipping 141 lines ...
  terminationGracePeriodSeconds: 30
status:
  phase: Pending
  qosClass: Guaranteed
has:name: valid-pod
Successful
message:Error from server (NotFound): pods "invalid-pod" not found
has:"invalid-pod" not found
pod "valid-pod" deleted
get.sh:196: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/redis-master created
pod/valid-pod created
Successful
... skipping 36 lines ...
+++ [0522 18:24:30] Creating namespace namespace-1590171870-23355
namespace/namespace-1590171870-23355 created
Context "test" modified.
+++ [0522 18:24:30] Testing kubectl exec POD COMMAND
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
pod/test-pod created
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pods "test-pod" not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
pod "test-pod" deleted
+++ exit code: 0
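Every exec invocation in this block prints the deprecation notice steering callers from the bare [POD] [COMMAND] form to the -- separator, which keeps kubectl's own flags unambiguous from the remote command's. A hedged illustration (pod and commands are made up):

kubectl exec test-pod date         # deprecated: bare [POD] [COMMAND]
kubectl exec test-pod -- date      # recommended: remote command after the -- separator
kubectl exec test-pod -- ls -la /  # everything after -- belongs to the remote command, not kubectl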
Recording: run_kubectl_exec_resource_name_tests
Running command: run_kubectl_exec_resource_name_tests

... skipping 3 lines ...
+++ [0522 18:24:31] Creating namespace namespace-1590171871-8595
namespace/namespace-1590171871-8595 created
Context "test" modified.
+++ [0522 18:24:31] Testing kubectl exec TYPE/NAME COMMAND
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: the server doesn't have a resource type "foo"
has:error:
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): deployments.apps "bar" not found
has:"bar" not found
pod/test-pod created
replicaset.apps/frontend created
I0522 18:24:32.600302   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590171871-8595", Name:"frontend", UID:"c2501aac-0b1e-4d04-8308-048940e4fc9b", APIVersion:"apps/v1", ResourceVersion:"1147", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-49xfk
I0522 18:24:32.602844   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590171871-8595", Name:"frontend", UID:"c2501aac-0b1e-4d04-8308-048940e4fc9b", APIVersion:"apps/v1", ResourceVersion:"1147", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-z54st
I0522 18:24:32.605491   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590171871-8595", Name:"frontend", UID:"c2501aac-0b1e-4d04-8308-048940e4fc9b", APIVersion:"apps/v1", ResourceVersion:"1147", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-dj86v
configmap/test-set-env-config created
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
has:not implemented
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod, type/name or --filename must be specified
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod frontend-49xfk does not have a host assigned
has not:not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod frontend-49xfk does not have a host assigned
has not:pod, type/name or --filename must be specified
pod "test-pod" deleted
replicaset.apps "frontend" deleted
configmap "test-set-env-config" deleted
+++ exit code: 0
Recording: run_create_secret_tests
Running command: run_create_secret_tests

+++ Running case: test-cmd.run_create_secret_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_create_secret_tests
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
message:user-specified
has:user-specified
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"905a0fdf-9452-47f3-b6a5-58ce6cdada40","resourceVersion":"1170","creationTimestamp":"2020-05-22T18:24:34Z"}}
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"905a0fdf-9452-47f3-b6a5-58ce6cdada40","resourceVersion":"1171","creationTimestamp":"2020-05-22T18:24:34Z"},"data":{"key1":"config1"}}
has:uid
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"905a0fdf-9452-47f3-b6a5-58ce6cdada40","resourceVersion":"1171","creationTimestamp":"2020-05-22T18:24:34Z"},"data":{"key1":"config1"}}
has:config1
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"905a0fdf-9452-47f3-b6a5-58ce6cdada40"}}
Successful
message:Error from server (NotFound): configmaps "tester-update-cm" not found
has:configmaps "tester-update-cm" not found
+++ exit code: 0
Recording: run_kubectl_create_kustomization_directory_tests
Running command: run_kubectl_create_kustomization_directory_tests

+++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 172 lines ...
has:Timeout exceeded while reading body
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          2s
has:valid-pod
Successful
message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
has:Invalid timeout value
pod "valid-pod" deleted
+++ exit code: 0
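
Note: the timeout validation above accepts bare integer seconds or an integer with a time unit. A minimal sketch of values that pass and fail that check (the pod name is illustrative):

    kubectl get pods valid-pod --request-timeout=1m    # accepted: integer plus unit
    kubectl get pods valid-pod --request-timeout=10    # accepted: bare integer seconds
    kubectl get pods valid-pod --request-timeout=fast  # rejected: "Invalid timeout value"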
Recording: run_crd_tests
Running command: run_crd_tests

... skipping 248 lines ...
foo.company.com/test patched
crd.sh:236: Successful get foos/test {{.patched}}: value1
foo.company.com/test patched
crd.sh:238: Successful get foos/test {{.patched}}: value2
foo.company.com/test patched
crd.sh:240: Successful get foos/test {{.patched}}: <no value>
+++ [0522 18:24:49] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
{
    "apiVersion": "company.com/v1",
    "kind": "Foo",
    "metadata": {
        "annotations": {
            "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 380 lines ...
I0522 18:25:25.938409   53868 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{http://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0522 18:25:25.938419   53868 clientconn.go:933] ClientConn switching balancer to "pick_first"
Waiting for Get bars {{len .items}} --namespace=non-native-resources: expected: 0, got: 1
Waiting for Get bars {{len .items}} --namespace=non-native-resources: expected: 0, got: 1
Waiting for Get bars {{len .items}} --namespace=non-native-resources: expected: 0, got: 1

crd.sh:458: FAIL!
Get bars {{len .items}}
  Expected: 0
  Got:      1
65 /home/prow/go/src/k8s.io/kubernetes/hack/lib/test.sh
... skipping 3 lines ...
!!! [0522 18:25:54]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 run_crd_tests(...)
!!! [0522 18:25:54]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0522 18:25:54]  5: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:130 juLog(...)
!!! [0522 18:25:54]  6: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:584 record_command(...)
!!! [0522 18:25:54]  7: hack/make-rules/test-cmd.sh:150 runTests(...)
+++ exit code: 1
+++ error: 1
Error when running run_crd_tests
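
Note: the "Waiting for Get bars ..." lines and the crd.sh:458 FAIL above come from a poll-until-match assertion that gave up while one bar object still existed. A minimal sketch of that pattern (a hypothetical stand-in, not the exact helper in hack/lib/test.sh; the function name, retry count, and sleep interval are assumptions):

    wait_for_bars_count() {
      local expected=$1 got=""
      for _ in $(seq 1 10); do
        # count remaining bar objects in the test namespace from the log
        got=$(kubectl get bars --namespace=non-native-resources -o name | wc -l)
        if [ "${got}" -eq "${expected}" ]; then
          return 0
        fi
        echo "Waiting for Get bars: expected: ${expected}, got: ${got}"
        sleep 1
      done
      echo "FAIL! expected: ${expected}, got: ${got}"
      return 1
    }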
+++ [0522 18:25:55] Testing recursive resources
+++ [0522 18:25:55] Creating namespace namespace-1590171955-21627
namespace/namespace-1590171955-21627 created
Context "test" modified.
generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:pod/busybox0 created
pod/busybox1 created
error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
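
Note: as the validation error itself suggests, the intentionally broken manifest can still be sent to the server by disabling client-side validation. A minimal sketch using the directory from the log (pairing it with --recursive is an assumption based on what these recursive-resource tests exercise):

    kubectl create -f hack/testdata/recursive/pod --recursive --validate=false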
generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
Successful
message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
(Bgeneric-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
(BSuccessful
message:pod/busybox0 replaced
pod/busybox1 replaced
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:Name:         busybox0
Namespace:    namespace-1590171955-21627
Priority:     0
Node:         <none>
... skipping 159 lines ...
has:Object 'Kind' is missing
generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
Successful
message:pod/busybox0 annotated
pod/busybox1 annotated
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod/busybox0 configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod/busybox1 configured
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx created
I0522 18:25:58.517282   57453 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1590171955-21627", Name:"nginx", UID:"323e7536-3002-4745-9812-bac01716b0c1", APIVersion:"apps/v1", ResourceVersion:"1389", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-9c6f87b75 to 3
I0522 18:25:58.521376   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590171955-21627", Name:"nginx-9c6f87b75", UID:"4a99099d-f8ee-49a5-a34b-b9062c4e3d36", APIVersion:"apps/v1", ResourceVersion:"1390", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9c6f87b75-jpm89
I0522 18:25:58.524385   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590171955-21627", Name:"nginx-9c6f87b75", UID:"4a99099d-f8ee-49a5-a34b-b9062c4e3d36", APIVersion:"apps/v1", ResourceVersion:"1390", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9c6f87b75-d6vjp
I0522 18:25:58.526572   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590171955-21627", Name:"nginx-9c6f87b75", UID:"4a99099d-f8ee-49a5-a34b-b9062c4e3d36", APIVersion:"apps/v1", ResourceVersion:"1390", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9c6f87b75-qbnkw
... skipping 50 lines ...
Waiting for Get pods {{range.items}}{{.metadata.name}}:{{end}} : expected: busybox0:busybox1:, got: busybox0:busybox1:nginx-9c6f87b75-d6vjp:nginx-9c6f87b75-jpm89:nginx-9c6f87b75-qbnkw:
generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
Successful
message:pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
Successful
message:pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "busybox0" force deleted
pod "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
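
Note: the force-deletion warning above corresponds to deleting with no grace period. A minimal sketch (the pod name is taken from the log; the exact flags the test passed are an assumption, since --grace-period=0 --force is the standard way to trigger this warning):

    kubectl delete pod busybox0 --grace-period=0 --force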
generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/busybox0 created
I0522 18:26:02.691720   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1590171955-21627", Name:"busybox0", UID:"734de1f3-f756-4832-b419-fd36950008c0", APIVersion:"v1", ResourceVersion:"1425", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-lpjhd
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0522 18:26:02.697818   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1590171955-21627", Name:"busybox1", UID:"318b0547-c6a2-4b08-b2b4-053dcf160fbe", APIVersion:"v1", ResourceVersion:"1427", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-czxt7
generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
Successful
message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
horizontalpodautoscaler.autoscaling/busybox1 autoscaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
horizontalpodautoscaler.autoscaling "busybox0" deleted
horizontalpodautoscaler.autoscaling "busybox1" deleted
generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
Successful
message:service/busybox0 exposed
service/busybox1 exposed
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I0522 18:26:05.185745   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1590171955-21627", Name:"busybox0", UID:"734de1f3-f756-4832-b419-fd36950008c0", APIVersion:"v1", ResourceVersion:"1450", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-p55tw
I0522 18:26:05.201439   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1590171955-21627", Name:"busybox1", UID:"318b0547-c6a2-4b08-b2b4-053dcf160fbe", APIVersion:"v1", ResourceVersion:"1456", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-dpnrf
generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
Successful
message:replicationcontroller/busybox0 scaled
replicationcontroller/busybox1 scaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx1-deployment created
I0522 18:26:06.280108   57453 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1590171955-21627", Name:"nginx1-deployment", UID:"8dff278c-43cd-4314-b90b-777b934762f0", APIVersion:"apps/v1", ResourceVersion:"1472", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-866c6857d5 to 2
deployment.apps/nginx0-deployment created
error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0522 18:26:06.285443   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590171955-21627", Name:"nginx1-deployment-866c6857d5", UID:"3fbd7354-109f-4704-a8c3-f8712e474049", APIVersion:"apps/v1", ResourceVersion:"1473", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-866c6857d5-hbjdh
I0522 18:26:06.287416   57453 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1590171955-21627", Name:"nginx0-deployment", UID:"6724a09e-f89e-457d-b2e2-5b15cc8f0a8a", APIVersion:"apps/v1", ResourceVersion:"1474", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-ff7db88b6 to 2
I0522 18:26:06.290413   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590171955-21627", Name:"nginx0-deployment-ff7db88b6", UID:"dff6c503-b6fa-48c6-be2f-8e6332458b6a", APIVersion:"apps/v1", ResourceVersion:"1478", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-ff7db88b6-6xpbk
I0522 18:26:06.291402   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590171955-21627", Name:"nginx1-deployment-866c6857d5", UID:"3fbd7354-109f-4704-a8c3-f8712e474049", APIVersion:"apps/v1", ResourceVersion:"1473", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-866c6857d5-ps92z
I0522 18:26:06.296057   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590171955-21627", Name:"nginx0-deployment-ff7db88b6", UID:"dff6c503-b6fa-48c6-be2f-8e6332458b6a", APIVersion:"apps/v1", ResourceVersion:"1478", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-ff7db88b6-8gzfb
generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
Successful
message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
deployment.apps/nginx1-deployment paused
deployment.apps/nginx0-deployment paused
generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
Successful
message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx0-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx1-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
deployment.apps "nginx1-deployment" force deleted
deployment.apps "nginx0-deployment" force deleted
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0522 18:26:08.000360   53868 client.go:360] parsed scheme: "passthrough"
I0522 18:26:08.000442   53868 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{http://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0522 18:26:08.000452   53868 clientconn.go:933] ClientConn switching balancer to "pick_first"
generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/busybox0 created
I0522 18:26:09.148264   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1590171955-21627", Name:"busybox0", UID:"e55fda74-655b-471a-846f-f8bf21d59f91", APIVersion:"v1", ResourceVersion:"1524", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-z69wl
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0522 18:26:09.154534   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1590171955-21627", Name:"busybox1", UID:"80a9bc21-23ce-4888-a822-c297a90e601a", APIVersion:"v1", ResourceVersion:"1526", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-fhnx5
generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
... skipping 2 lines ...
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox0" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox1" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox0" resuming is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox1" resuming is not supported
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0522 18:26:10.221324   57453 gc_controller.go:78] PodGC is force deleting Pod: namespace-1590171955-21627/busybox1-fhnx5
E0522 18:26:10.222794   57453 gc_controller.go:236] pods "busybox1-fhnx5" not found
I0522 18:26:10.222814   57453 gc_controller.go:78] PodGC is force deleting Pod: namespace-1590171955-21627/busybox0-z69wl
E0522 18:26:10.224725   57453 gc_controller.go:236] pods "busybox0-z69wl" not found
Recording: run_namespace_tests
Running command: run_namespace_tests

+++ Running case: test-cmd.run_namespace_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_namespace_tests
+++ [0522 18:26:10] Testing kubectl(v1:namespaces)
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created (dry run)
namespace/my-namespace created (server dry run)
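
Note: the "(dry run)" and "(server dry run)" suffixes above distinguish client-side from server-side dry runs. A minimal sketch of both (the flag spelling assumes the --dry-run=client/server form; the output flag is illustrative):

    kubectl create namespace my-namespace --dry-run=client -o yaml
    kubectl create namespace my-namespace --dry-run=server -o yaml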
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created
core.sh:1446: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
(Bnamespace "my-namespace" deleted
namespace/my-namespace condition met
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created
core.sh:1455: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
... skipping 31 lines ...
namespace "namespace-1590171876-25179" deleted
namespace "namespace-1590171876-5045" deleted
namespace "namespace-1590171878-30108" deleted
namespace "namespace-1590171881-19089" deleted
namespace "namespace-1590171883-4903" deleted
namespace "namespace-1590171955-21627" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:warning: deleting cluster-scoped resources
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
namespace "my-namespace" deleted
namespace "namespace-1590171672-22432" deleted
... skipping 29 lines ...
namespace "namespace-1590171876-25179" deleted
namespace "namespace-1590171876-5045" deleted
namespace "namespace-1590171878-30108" deleted
namespace "namespace-1590171881-19089" deleted
namespace "namespace-1590171883-4903" deleted
namespace "namespace-1590171955-21627" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:namespace "my-namespace" deleted
namespace/quotas created
core.sh:1462: Successful get namespaces/quotas {{.metadata.name}}: quotas
core.sh:1463: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name \"test-quota\" }}found{{end}}{{end}}:: :
resourcequota/test-quota created (dry run)
resourcequota/test-quota created (server dry run)
... skipping 10 lines ...
core.sh:1486: Successful get namespaces/other {{.metadata.name}}: other
core.sh:1490: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
pod/valid-pod created
core.sh:1494: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:1496: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Successful
message:error: a resource cannot be retrieved by name across all namespaces
has:a resource cannot be retrieved by name across all namespaces
core.sh:1503: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:1507: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
(Bnamespace "other" deleted
... skipping 154 lines ...
+++ command: run_client_config_tests
+++ [0522 18:26:49] Creating namespace namespace-1590172009-21676
namespace/namespace-1590172009-21676 created
Context "test" modified.
+++ [0522 18:26:49] Testing client config
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:Error in configuration: context was not found for specified context: missing-context
has:context was not found for specified context: missing-context
Successful
message:error: no server found for cluster "missing-cluster"
has:no server found for cluster "missing-cluster"
Successful
message:error: auth info "missing-user" does not exist
has:auth info "missing-user" does not exist
Successful
message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
has:error loading config file
Successful
message:error: stat missing-config: no such file or directory
has:no such file or directory
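
Note: the client-config checks above each point a different override at a missing target. A minimal sketch of the flags being exercised (this flag-to-error mapping is an inference from the messages, not shown verbatim in the log):

    kubectl get pods --kubeconfig=missing          # error: stat missing: no such file or directory
    kubectl get pods --context=missing-context     # context was not found for specified context
    kubectl get pods --cluster=missing-cluster     # no server found for cluster "missing-cluster"
    kubectl get pods --user=missing-user           # auth info "missing-user" does not exist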
+++ exit code: 0
Recording: run_service_accounts_tests
Running command: run_service_accounts_tests

+++ Running case: test-cmd.run_service_accounts_tests 
... skipping 46 lines ...
Labels:                        <none>
Annotations:                   <none>
Schedule:                      59 23 31 2 *
Concurrency Policy:            Allow
Suspend:                       False
Successful Job History Limit:  3
Failed Job History Limit:      1
Starting Deadline Seconds:     <unset>
Selector:                      <unset>
Parallelism:                   <unset>
Completions:                   <unset>
Pod Template:
  Labels:  <none>
... skipping 39 lines ...
Labels:         controller-uid=38070219-76b9-47c8-8be6-ebf3f3bdcc0c
                job-name=test-job
Annotations:    cronjob.kubernetes.io/instantiate: manual
Parallelism:    1
Completions:    1
Start Time:     Fri, 22 May 2020 18:27:00 +0000
Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=38070219-76b9-47c8-8be6-ebf3f3bdcc0c
           job-name=test-job
  Containers:
   pi:
    Image:      k8s.gcr.io/perl
... skipping 464 lines ...
  type: ClusterIP
status:
  loadBalancer: {}
Successful
message:kubectl-create kubectl-set
has:kubectl-set
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
core.sh:1007: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
service/redis-master selector updated
Successful
message:Error from server (Conflict): Operation cannot be fulfilled on services "redis-master": the object has been modified; please apply your changes to the latest version and try again
has:Conflict
core.sh:1020: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
(Bservice "redis-master" deleted
core.sh:1027: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
core.sh:1031: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
service/redis-master created
... skipping 122 lines ...
 (dry run)
daemonset.apps/bind rolled back (server dry run)
apps.sh:87: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
apps.sh:88: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:89: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
daemonset.apps/bind rolled back
E0522 18:27:27.227310   57453 daemon_controller.go:291] namespace-1590172044-13163/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1590172044-13163", SelfLink:"/apis/apps/v1/namespaces/namespace-1590172044-13163/daemonsets/bind", UID:"79f4df9d-c8e7-48df-b1fa-d6a4db6772ac", ResourceVersion:"1948", Generation:3, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725768844, loc:(*time.Location)(0x717e8e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"3", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1590172044-13163\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001e6a3e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001e6a400)}, v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001e6a440), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001e6a480)}, v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001e6a4c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001e6a500)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001e6a5c0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:2.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), 
EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0024ec0b8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00018bb90), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc001e6a660), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc002056000)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0024ec10c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:2, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
apps.sh:92: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:93: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
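
Note: the revision error above comes from asking rollout undo for a history entry that does not exist. A minimal sketch with the daemonset from this section (revision 1000000 is the value from the log; whether revision 1 exists depends on the rollout history):

    kubectl rollout undo daemonset/bind --to-revision=1          # succeeds if revision 1 is in the history
    kubectl rollout undo daemonset/bind --to-revision=1000000    # error: unable to find specified revision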
apps.sh:97: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:98: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
daemonset.apps/bind rolled back
E0522 18:27:28.061004   57453 daemon_controller.go:291] namespace-1590172044-13163/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1590172044-13163", SelfLink:"/apis/apps/v1/namespaces/namespace-1590172044-13163/daemonsets/bind", UID:"79f4df9d-c8e7-48df-b1fa-d6a4db6772ac", ResourceVersion:"1953", Generation:4, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725768844, loc:(*time.Location)(0x717e8e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"4", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1590172044-13163\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001d16820), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001d168e0)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001d16940), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001d16980)}, v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001d169c0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001d16a00)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001d16a40), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:latest", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), 
EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"app", Image:"k8s.gcr.io/nginx:test-cmd", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002f74538), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc003b525b0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc001d16a80), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001c423e0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002f7458c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:3, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
apps.sh:101: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
apps.sh:102: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:103: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
daemonset.apps "bind" deleted
+++ exit code: 0
Recording: run_rc_tests
... skipping 32 lines ...
Namespace:    namespace-1590172048-14289
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1590172048-14289
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
Namespace:    namespace-1590172048-14289
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
Namespace:    namespace-1590172048-14289
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 27 lines ...
Namespace:    namespace-1590172048-14289
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 20 lines ...
Namespace:    namespace-1590172048-14289
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1590172048-14289
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
Namespace:    namespace-1590172048-14289
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 15 lines ...
core.sh:1211: Successful get rc frontend {{.spec.replicas}}: 3
replicationcontroller/frontend scaled
E0522 18:27:31.910630   57453 replica_set.go:200] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend  namespace-1590172048-14289 /api/v1/namespaces/namespace-1590172048-14289/replicationcontrollers/frontend 0e7181f4-5a89-4f75-8a2e-468eb350333d 1990 2 2020-05-22 18:27:30 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  [{kube-controller-manager Update v1 2020-05-22 18:27:30 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}} {kubectl-create Update v1 2020-05-22 18:27:30 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{"f:replicas":{},"f:selector":{".":{},"f:app":{},"f:tier":{}},"f:template":{".":{},"f:metadata":{".":{},"f:creationTimestamp":{},"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{".":{},"f:containers":{".":{},"k:{\"name\":\"php-redis\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"GET_HOSTS_FROM\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v4 [] []  [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0032f4578 <nil> ClusterFirst map[]   <nil>  false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
I0522 18:27:31.916349   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1590172048-14289", Name:"frontend", UID:"0e7181f4-5a89-4f75-8a2e-468eb350333d", APIVersion:"v1", ResourceVersion:"1990", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-l96pd
core.sh:1215: Successful get rc frontend {{.spec.replicas}}: 2
core.sh:1219: Successful get rc frontend {{.spec.replicas}}: 2
error: Expected replicas to be 3, was 2
core.sh:1223: Successful get rc frontend {{.spec.replicas}}: 2
core.sh:1227: Successful get rc frontend {{.spec.replicas}}: 2
replicationcontroller/frontend scaled
I0522 18:27:32.700483   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1590172048-14289", Name:"frontend", UID:"0e7181f4-5a89-4f75-8a2e-468eb350333d", APIVersion:"v1", ResourceVersion:"1996", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-rqjhk
core.sh:1231: Successful get rc frontend {{.spec.replicas}}: 3
core.sh:1235: Successful get rc frontend {{.spec.replicas}}: 3
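The "error: Expected replicas to be 3, was 2" line above is an intentional negative case: kubectl scale was given a --current-replicas precondition that no longer matched the live object. A sketch of the pattern under test, with values taken from the assertions above:

    kubectl scale rc frontend --current-replicas=3 --replicas=2   # fails unless replicas is exactly 3
    kubectl scale rc frontend --replicas=3                        # unconditional scale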
... skipping 31 lines ...
deployment.apps "nginx-deployment" deleted
Successful
message:service/expose-test-deployment exposed
has:service/expose-test-deployment exposed
service "expose-test-deployment" deleted
Successful
message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
See 'kubectl expose -h' for help and examples
has:invalid deployment: no selectors
deployment.apps/nginx-deployment created
I0522 18:27:35.971769   57453 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1590172048-14289", Name:"nginx-deployment", UID:"497027cb-c7e9-4b04-a9c2-2aad8a69ca0c", APIVersion:"apps/v1", ResourceVersion:"2103", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6866878c7b to 3
I0522 18:27:35.975823   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590172048-14289", Name:"nginx-deployment-6866878c7b", UID:"f3e77ba4-fd5a-45ae-b8d8-875c9ddcac2c", APIVersion:"apps/v1", ResourceVersion:"2104", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6866878c7b-v5jtg
I0522 18:27:35.978548   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590172048-14289", Name:"nginx-deployment-6866878c7b", UID:"f3e77ba4-fd5a-45ae-b8d8-875c9ddcac2c", APIVersion:"apps/v1", ResourceVersion:"2104", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6866878c7b-g59rs
... skipping 23 lines ...
service "frontend" deleted
service "frontend-2" deleted
service "frontend-3" deleted
service "frontend-4" deleted
service "frontend-5" deleted
Successful
message:error: cannot expose a Node
has:cannot expose
Successful
message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
has:metadata.name: Invalid value
Successful
message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 30 lines ...
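The expose failures above are all deliberate: a deployment with no selector cannot be exposed, a Node is not an exposable resource, and service names are capped at 63 characters (a DNS label). For reference, the happy-path form, with names from the surrounding tests:

    # kubectl expose copies the workload's selector onto the new service
    kubectl expose deployment nginx-deployment --port=80 --target-port=8000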
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1378: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
horizontalpodautoscaler.autoscaling "frontend" deleted
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1382: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
horizontalpodautoscaler.autoscaling "frontend" deleted
Error: required flag(s) "max" not set
replicationcontroller "frontend" deleted
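'Error: required flag(s) "max" not set' is the expected failure mode of kubectl autoscale: --max is mandatory, while --min may be omitted (the first assertion above shows it defaulting to 1). Roughly what the test ran, with flag values from the checks above:

    kubectl autoscale rc frontend --max=2 --cpu-percent=70           # min defaults to 1
    kubectl autoscale rc frontend --min=2 --max=3 --cpu-percent=80
    kubectl autoscale rc frontend --min=2 --cpu-percent=80           # rejected: --max not set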
core.sh:1391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
... skipping 24 lines ...
          limits:
            cpu: 300m
          requests:
            cpu: 300m
      terminationGracePeriodSeconds: 0
status: {}
Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
deployment.apps/nginx-deployment-resources created
I0522 18:27:44.012981   57453 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1590172048-14289", Name:"nginx-deployment-resources", UID:"8b02aba0-9a9c-4954-9c5b-332a264c3108", APIVersion:"apps/v1", ResourceVersion:"2266", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-79666b9cd9 to 3
I0522 18:27:44.017143   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590172048-14289", Name:"nginx-deployment-resources-79666b9cd9", UID:"a6d0d95d-80c7-4d8e-bfc8-05d62034bf92", APIVersion:"apps/v1", ResourceVersion:"2267", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-79666b9cd9-9dcbw
I0522 18:27:44.020068   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590172048-14289", Name:"nginx-deployment-resources-79666b9cd9", UID:"a6d0d95d-80c7-4d8e-bfc8-05d62034bf92", APIVersion:"apps/v1", ResourceVersion:"2267", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-79666b9cd9-lc6k8
I0522 18:27:44.023733   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590172048-14289", Name:"nginx-deployment-resources-79666b9cd9", UID:"a6d0d95d-80c7-4d8e-bfc8-05d62034bf92", APIVersion:"apps/v1", ResourceVersion:"2267", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-79666b9cd9-mg4nc
core.sh:1397: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
core.sh:1398: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
core.sh:1399: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment-resources resource requirements updated
I0522 18:27:44.554563   57453 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1590172048-14289", Name:"nginx-deployment-resources", UID:"8b02aba0-9a9c-4954-9c5b-332a264c3108", APIVersion:"apps/v1", ResourceVersion:"2280", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-8b888884f to 1
I0522 18:27:44.565624   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590172048-14289", Name:"nginx-deployment-resources-8b888884f", UID:"c494495b-82ce-469e-99f2-417cca7468c4", APIVersion:"apps/v1", ResourceVersion:"2281", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-8b888884f-bs8b7
core.sh:1402: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
core.sh:1403: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
error: unable to find container named redis
deployment.apps/nginx-deployment-resources resource requirements updated
I0522 18:27:45.089131   57453 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1590172048-14289", Name:"nginx-deployment-resources", UID:"8b02aba0-9a9c-4954-9c5b-332a264c3108", APIVersion:"apps/v1", ResourceVersion:"2290", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-79666b9cd9 to 2
I0522 18:27:45.096242   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590172048-14289", Name:"nginx-deployment-resources-79666b9cd9", UID:"a6d0d95d-80c7-4d8e-bfc8-05d62034bf92", APIVersion:"apps/v1", ResourceVersion:"2294", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-79666b9cd9-9dcbw
I0522 18:27:45.097526   57453 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1590172048-14289", Name:"nginx-deployment-resources", UID:"8b02aba0-9a9c-4954-9c5b-332a264c3108", APIVersion:"apps/v1", ResourceVersion:"2293", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-76f48f979f to 1
I0522 18:27:45.106389   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590172048-14289", Name:"nginx-deployment-resources-76f48f979f", UID:"ac937bef-5f83-41ce-9c69-b253de961993", APIVersion:"apps/v1", ResourceVersion:"2298", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-76f48f979f-kjgt5
core.sh:1408: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
... skipping 387 lines ...
    status: "True"
    type: Progressing
  observedGeneration: 4
  replicas: 4
  unavailableReplicas: 4
  updatedReplicas: 1
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
core.sh:1419: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
core.sh:1420: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
core.sh:1421: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
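Two expected failures bracket the set-resources checks: containers are addressed by name with -c (hence "unable to find container named redis"), and --local refuses to run without an explicit -f/--filename. A sketch, with the container name taken from the tests above and rsrc.yaml as the illustrative file from the error text:

    kubectl set resources deployment nginx-deployment-resources -c=perl --limits=cpu=300m --requests=cpu=300m
    kubectl set resources -f rsrc.yaml --limits=cpu=200m --local -o yaml   # renders locally, never contacts the server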
... skipping 47 lines ...
                pod-template-hash=c9cc54d87
Annotations:    deployment.kubernetes.io/desired-replicas: 1
                deployment.kubernetes.io/max-replicas: 2
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/test-nginx-apps
Replicas:       1 current / 1 desired
Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=test-nginx-apps
           pod-template-hash=c9cc54d87
  Containers:
   nginx:
    Image:        k8s.gcr.io/nginx:test-cmd
... skipping 103 lines ...
    Image:	k8s.gcr.io/nginx:test-cmd
deployment.apps/nginx rolled back (server dry run)
apps.sh:308: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
deployment.apps/nginx rolled back
I0522 18:27:57.500623   57453 horizontal.go:354] Horizontal Pod Autoscaler frontend has been deleted in namespace-1590172048-14289
apps.sh:312: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
error: unable to find specified revision 1000000 in history
apps.sh:315: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
deployment.apps/nginx rolled back
apps.sh:319: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
deployment.apps/nginx paused
error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
error: deployments.apps "nginx" can't restart paused deployment (run rollout resume first)
deployment.apps/nginx resumed
deployment.apps/nginx rolled back
    deployment.kubernetes.io/revision-history: 1,3
error: desired revision (3) is different from the running revision (5)
deployment.apps/nginx restarted
I0522 18:28:00.754360   57453 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1590172067-22365", Name:"nginx", UID:"72392d25-a2b3-4157-8706-27cd61657c4d", APIVersion:"apps/v1", ResourceVersion:"2524", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-697546885c to 0
I0522 18:28:00.762118   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590172067-22365", Name:"nginx-697546885c", UID:"101e409d-fc89-4b01-bd1b-faa8b47c3c1b", APIVersion:"apps/v1", ResourceVersion:"2528", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-697546885c-xmzc2
I0522 18:28:00.762489   57453 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1590172067-22365", Name:"nginx", UID:"72392d25-a2b3-4157-8706-27cd61657c4d", APIVersion:"apps/v1", ResourceVersion:"2526", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-666597b69b to 1
I0522 18:28:00.766769   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590172067-22365", Name:"nginx-666597b69b", UID:"678b045d-f898-4e8e-9a0d-0da5470bf74a", APIVersion:"apps/v1", ResourceVersion:"2531", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-666597b69b-hqzdg
Successful
... skipping 149 lines ...
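The rollout block above walks through undo, pause/resume, and restart: undo to a nonexistent revision fails ("unable to find specified revision 1000000 in history"), and a paused deployment can be neither rolled back nor restarted until it is resumed. The command sequence, roughly:

    kubectl rollout undo deployment/nginx --to-revision=1000000   # fails: not in history
    kubectl rollout pause deployment/nginx
    kubectl rollout undo deployment/nginx                         # fails while paused
    kubectl rollout resume deployment/nginx
    kubectl rollout restart deployment/nginx                      # bumps the pod template to force a new rollout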
apps.sh:363: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment image updated
I0522 18:28:04.384722   57453 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1590172067-22365", Name:"nginx-deployment", UID:"f2e16bfe-c6ba-4a8b-8cfd-d436b8bb2f2e", APIVersion:"apps/v1", ResourceVersion:"2597", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6d5f69bf98 to 1
I0522 18:28:04.390342   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590172067-22365", Name:"nginx-deployment-6d5f69bf98", UID:"d2496f89-c6ec-4558-9631-2e6cd90104ab", APIVersion:"apps/v1", ResourceVersion:"2598", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6d5f69bf98-4g9cb
apps.sh:366: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
apps.sh:367: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
error: unable to find container named "redis"
deployment.apps/nginx-deployment image updated
apps.sh:372: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:373: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment image updated
apps.sh:376: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
apps.sh:377: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
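kubectl set image resolves containers by name just as set resources does, so "unable to find container named "redis"" is the expected rejection. A sketch, with image tags from the assertions above:

    kubectl set image deployment nginx-deployment nginx=k8s.gcr.io/nginx:1.7.9
    kubectl set image deployment nginx-deployment redis=redis                     # fails: no such container
    kubectl set image deployment nginx-deployment '*'=k8s.gcr.io/nginx:test-cmd   # '*' targets every container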
... skipping 67 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_rs_tests
+++ [0522 18:28:11] Creating namespace namespace-1590172091-21988
namespace/namespace-1590172091-21988 created
Context "test" modified.
+++ [0522 18:28:11] Testing kubectl(v1:replicasets)
E0522 18:28:11.261682   57453 replica_set.go:535] sync "namespace-1590172067-22365/nginx-deployment-5859b66c86" failed with replicasets.apps "nginx-deployment-5859b66c86" not found
E0522 18:28:11.310727   57453 replica_set.go:535] sync "namespace-1590172067-22365/nginx-deployment-dfd7cb955" failed with replicasets.apps "nginx-deployment-dfd7cb955" not found
E0522 18:28:11.360873   57453 replica_set.go:535] sync "namespace-1590172067-22365/nginx-deployment-75bb56f9c" failed with replicasets.apps "nginx-deployment-75bb56f9c" not found
apps.sh:540: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
replicaset.apps/frontend created
I0522 18:28:11.686326   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590172091-21988", Name:"frontend", UID:"1ff71aa3-a1aa-442e-abc1-94b9b1ff48fb", APIVersion:"apps/v1", ResourceVersion:"2792", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-248g2
I0522 18:28:11.690011   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590172091-21988", Name:"frontend", UID:"1ff71aa3-a1aa-442e-abc1-94b9b1ff48fb", APIVersion:"apps/v1", ResourceVersion:"2792", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-r7n9z
I0522 18:28:11.693507   57453 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590172091-21988", Name:"frontend", UID:"1ff71aa3-a1aa-442e-abc1-94b9b1ff48fb", APIVersion:"apps/v1", ResourceVersion:"2792", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-k8hcz
+++ [0522 18:28:11] Deleting rs
... skipping 34 lines ...
Namespace:    namespace-1590172091-21988
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1590172091-21988
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
Namespace:    namespace-1590172091-21988
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
Namespace:    namespace-1590172091-21988
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 25 lines ...
Namespace:    namespace-1590172091-21988
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1590172091-21988
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1590172091-21988
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
Namespace:    namespace-1590172091-21988
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 216 lines ...
horizontalpodautoscaler.autoscaling/frontend autoscaled
apps.sh:705: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
Successful
message:kubectl-autoscale
has:kubectl-autoscale
horizontalpodautoscaler.autoscaling "frontend" deleted
Error: required flag(s) "max" not set
replicaset.apps "frontend" deleted
+++ exit code: 0
Recording: run_stateful_set_tests
Running command: run_stateful_set_tests

+++ Running case: test-cmd.run_stateful_set_tests 
... skipping 61 lines ...
apps.sh:465: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:466: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
statefulset.apps/nginx rolled back
apps.sh:469: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
apps.sh:470: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
apps.sh:474: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
apps.sh:475: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
statefulset.apps/nginx rolled back
apps.sh:478: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
apps.sh:479: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
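StatefulSets share the rollout machinery exercised for deployments above, including revision history, so the same undo semantics apply (and the same "unable to find specified revision" failure for bogus revisions):

    kubectl rollout undo statefulset/nginx                  # back one revision
    kubectl rollout undo statefulset/nginx --to-revision=2  # jump to a specific revision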
... skipping 58 lines ...
Name:         mock
Namespace:    namespace-1590172114-29883
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 56 lines ...
Name:         mock
Namespace:    namespace-1590172114-29883
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 57 lines ...
Name:         mock
Namespace:    namespace-1590172114-29883
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 44 lines ...
Namespace:    namespace-1590172114-29883
Selector:     app=mock
Labels:       app=mock
              status=replaced
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 11 lines ...
Namespace:    namespace-1590172114-29883
Selector:     app=mock2
Labels:       app=mock2
              status=replaced
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock2
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 109 lines ...
storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
persistentvolume "pv0001" deleted
persistentvolume/pv0002 created
storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
persistentvolume "pv0002" deleted
persistentvolume/pv0003 created
E0522 18:28:55.458325   57453 pv_protection_controller.go:118] PV pv0003 failed with : Operation cannot be fulfilled on persistentvolumes "pv0003": the object has been modified; please apply your changes to the latest version and try again
storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
persistentvolume "pv0003" deleted
storage.sh:42: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
persistentvolume/pv0001 created
E0522 18:28:56.162332   57453 pv_protection_controller.go:118] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
storage.sh:45: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
persistentvolume "pv0001" deleted
has:warning: deleting cluster-scoped resources
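The two pv_protection_controller conflicts above are the flake this PR targets: the controller adds and removes the kubernetes.io/pv-protection finalizer asynchronously, so when the test creates and deletes PVs back to back, the finalizer update can race the deletion and lose with "the object has been modified". The shape of the test loop, with hypothetical fixture paths:

    # Rapid create/delete of PVs; the finalizer update from the
    # pv-protection controller can conflict with the deletion.
    for pv in pv0001 pv0002 pv0003; do
      kubectl create -f "test/fixtures/pv/${pv}.yaml"   # path illustrative, not the real fixture location
      kubectl delete pv "${pv}"
    done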
Successful
... skipping 539 lines ...
yes
has:the server doesn't have a resource type
Successful
message:yes
has:yes
Successful
message:error: --subresource can not be used with NonResourceURL
has:subresource can not be used with NonResourceURL
Successful
Successful
message:yes
0
has:0
Successful
message:0
has:0
Successful
message:yes
has not:Warning
FAIL!
message:yes
has not:Warning: the server doesn't have a resource type 'foo'
805 /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh
!!! [0522 18:29:04] Call tree:
!!! [0522 18:29:04]  1: hack/make-rules/test-cmd.sh:150 runTests(...)
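The FAIL! above records a tripped assertion at legacy-script.sh line 805: the harness asserts that the "server doesn't have a resource type" warning must not appear in the captured output of kubectl auth can-i, but it did. The command family being exercised, for reference:

    kubectl auth can-i get pods                        # prints yes/no
    kubectl auth can-i get foo                         # warns: the server doesn't have a resource type 'foo'
    kubectl auth can-i get /logs --subresource=log     # rejected: --subresource with a NonResourceURL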
I0522 18:29:04.679916   53868 controller.go:181] Shutting down kubernetes service endpoint reconciler
... skipping 14 lines ...
I0522 18:29:04.680934   53868 secure_serving.go:231] Stopped listening on 127.0.0.1:6443
I0522 18:29:04.680780   53868 dynamic_serving_content.go:145] Shutting down serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key
I0522 18:29:04.681029   53868 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
W0522 18:29:04.681176   53868 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
... skipping 83 repeated grpc reconnect lines ...
I0522 18:29:04.684128   53868 controller.go:87] Shutting down OpenAPI AggregationController
E0522 18:29:04.684528   53868 controller.go:184] rpc error: code = Unavailable desc = transport is closing
junit report dir: /logs/artifacts
+++ [0522 18:29:04] Clean up complete
make: *** [Makefile:316: test-cmd] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...