PR eloyekunle: Extended CRD Validation
Result FAILURE
Tests 0 failed / 106 succeeded
Started 2020-02-14 17:56
Elapsed 23m50s
Revision 544b010828adf1f6be5fae1b17854f51675a5d41
Refs 88076

No Test Failures!


Error lines from build-log.txt

Docker in Docker enabled, initializing...
================================================================================
Starting Docker: docker.
Waiting for docker to be ready, sleeping for 1 seconds.
[Barnacle] 2020/02/14 17:56:08 Cleaning up Docker data root...
[Barnacle] 2020/02/14 17:56:08 Removing all containers.
[Barnacle] 2020/02/14 17:56:08 Failed to list containers: Error response from daemon: client version 1.41 is too new. Maximum supported API version is 1.40
[Barnacle] 2020/02/14 17:56:08 Removing recently created images.
[Barnacle] 2020/02/14 17:56:08 Failed to list images: Error response from daemon: client version 1.41 is too new. Maximum supported API version is 1.40
[Barnacle] 2020/02/14 17:56:08 Pruning dangling images.
[Barnacle] 2020/02/14 17:56:08 Failed to list images: Error response from daemon: client version 1.41 is too new. Maximum supported API version is 1.40
[Barnacle] 2020/02/14 17:56:08 Pruning volumes.
[Barnacle] 2020/02/14 17:56:08 Failed to prune volumes: Error response from daemon: client version 1.41 is too new. Maximum supported API version is 1.40
[Barnacle] 2020/02/14 17:56:08 Done cleaning up Docker data root.
Remaining docker images and volumes are:
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
DRIVER              VOLUME NAME
Cleaning up binfmt_misc ...
================================================================================
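The Barnacle failures above are a client/daemon API version skew: the cleanup tool's Docker client speaks API 1.41 while the daemon in this image only supports 1.40, so every list/prune call is rejected. A Go client can sidestep this class of failure by negotiating the API version at connect time; a minimal sketch using the standard github.com/docker/docker/client package (the error handling is illustrative, not Barnacle's actual code):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	// WithAPIVersionNegotiation pins the client to the daemon's maximum
	// supported API version instead of failing with
	// "client version 1.41 is too new".
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	containers, err := cli.ContainerList(context.Background(), types.ContainerListOptions{All: true})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("negotiated API %s, %d containers\n", cli.ClientVersion(), len(containers))
}

(Setting the DOCKER_API_VERSION environment variable to 1.40 would have the same effect for a client built with client.FromEnv.)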
... skipping 40 lines ...
Recording: record_command_canary
Running command: record_command_canary

+++ Running case: test-cmd.record_command_canary 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: record_command_canary
/home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 155: bogus-expected-to-fail: command not found
!!! [0214 18:03:40] Call tree:
!!! [0214 18:03:40]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 record_command_canary(...)
!!! [0214 18:03:40]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0214 18:03:40]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:131 juLog(...)
!!! [0214 18:03:40]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:159 record_command(...)
!!! [0214 18:03:40]  5: hack/make-rules/test-cmd.sh:35 source(...)
+++ exit code: 1
+++ error: 1
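This canary failure is by design: legacy-script.sh deliberately runs the nonexistent command bogus-expected-to-fail so the shell2junit wrapper can prove it records failing commands, which is why the run continues past the exit code 1. A rough Go analogue of that self-check (a hypothetical helper, not the actual bash harness):

package main

import (
	"fmt"
	"os/exec"
)

// expectFailure runs a command and succeeds only if the command fails,
// mirroring what the record_command canary asserts.
func expectFailure(name string, args ...string) error {
	if err := exec.Command(name, args...).Run(); err == nil {
		return fmt.Errorf("%s unexpectedly succeeded", name)
	}
	return nil // a non-zero exit (or exec error) is the expected outcome
}

func main() {
	if err := expectFailure("bogus-expected-to-fail"); err != nil {
		fmt.Println("canary broken:", err)
		return
	}
	fmt.Println("canary OK: the harness saw the failure it expected")
}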
+++ [0214 18:03:40] Running kubeadm tests
+++ [0214 18:03:53] Building go targets for linux/amd64:
    cmd/kubeadm
+++ [0214 18:05:10] Running tests without code coverage
{"Time":"2020-02-14T18:07:29.501822777Z","Action":"output","Package":"k8s.io/kubernetes/cmd/kubeadm/test/cmd","Output":"ok  \tk8s.io/kubernetes/cmd/kubeadm/test/cmd\t78.501s\n"}
✓  cmd/kubeadm/test/cmd (1m18.501s)
... skipping 302 lines ...
+++ [0214 18:10:23] Building kube-controller-manager
+++ [0214 18:10:34] Building go targets for linux/amd64:
    cmd/kube-controller-manager
+++ [0214 18:11:29] Starting controller-manager
Flag --port has been deprecated, see --secure-port instead.
I0214 18:11:30.538735   55474 serving.go:313] Generated self-signed cert in-memory
W0214 18:11:31.738239   55474 authentication.go:410] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0214 18:11:31.738322   55474 authentication.go:268] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0214 18:11:31.738335   55474 authentication.go:292] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0214 18:11:31.738359   55474 authorization.go:177] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0214 18:11:31.738384   55474 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0214 18:11:31.738410   55474 controllermanager.go:161] Version: v1.18.0-alpha.5.127+7445e5a7cf869a
I0214 18:11:31.742832   55474 secure_serving.go:178] Serving securely on [::]:10257
I0214 18:11:31.742984   55474 tlsconfig.go:241] Starting DynamicServingCertificateController
I0214 18:11:31.743322   55474 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
I0214 18:11:31.743395   55474 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...
... skipping 68 lines ...
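The lease acquisition logged just above is client-go's standard leader election. A minimal sketch of the same mechanism with k8s.io/client-go/tools/leaderelection (the lease name "demo-controller" and the durations are illustrative; kube-controller-manager uses its own configured values):

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()

	// The Lease object in kube-system plays the role of the lock the
	// controller-manager is "attempting to acquire" in the log above.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "demo-controller"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { log.Println("became leader") },
			OnStoppedLeading: func() { log.Println("lost the lease") },
		},
	})
}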
I0214 18:11:32.535168   55474 shared_informer.go:206] Waiting for caches to sync for endpoint
I0214 18:11:32.535526   55474 controllermanager.go:533] Started "disruption"
W0214 18:11:32.535661   55474 controllermanager.go:512] "tokencleaner" is disabled
I0214 18:11:32.535555   55474 disruption.go:331] Starting disruption controller
I0214 18:11:32.535934   55474 shared_informer.go:206] Waiting for caches to sync for disruption
I0214 18:11:32.536075   55474 node_lifecycle_controller.go:77] Sending events to api server
E0214 18:11:32.536338   55474 core.go:231] failed to start cloud node lifecycle controller: no cloud provider provided
W0214 18:11:32.539983   55474 controllermanager.go:525] Skipping "cloud-node-lifecycle"
I0214 18:11:32.540529   55474 controllermanager.go:533] Started "pv-protection"
W0214 18:11:32.540553   55474 controllermanager.go:525] Skipping "root-ca-cert-publisher"
I0214 18:11:32.540647   55474 pv_protection_controller.go:83] Starting PV protection controller
I0214 18:11:32.540681   55474 shared_informer.go:206] Waiting for caches to sync for PV protection
I0214 18:11:32.541142   55474 controllermanager.go:533] Started "podgc"
... skipping 81 lines ...
I0214 18:11:32.834325   55474 controllermanager.go:533] Started "csrapproving"
I0214 18:11:32.834890   55474 certificate_controller.go:118] Starting certificate controller "csrapproving"
I0214 18:11:32.834911   55474 shared_informer.go:206] Waiting for caches to sync for certificate-csrapproving
I0214 18:11:32.835162   55474 controllermanager.go:533] Started "csrcleaner"
W0214 18:11:32.835180   55474 controllermanager.go:512] "bootstrapsigner" is disabled
I0214 18:11:32.835220   55474 cleaner.go:82] Starting CSR cleaner controller
E0214 18:11:32.835521   55474 core.go:90] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0214 18:11:32.835544   55474 controllermanager.go:525] Skipping "service"
I0214 18:11:32.835903   55474 controllermanager.go:533] Started "replicationcontroller"
I0214 18:11:32.835939   55474 replica_set.go:181] Starting replicationcontroller controller
I0214 18:11:32.835954   55474 shared_informer.go:206] Waiting for caches to sync for ReplicationController
I0214 18:11:32.915344   55474 shared_informer.go:213] Caches are synced for ClusterRoleAggregator 
I0214 18:11:32.927957   55474 shared_informer.go:213] Caches are synced for namespace 
I0214 18:11:32.935383   55474 shared_informer.go:213] Caches are synced for certificate-csrapproving 
E0214 18:11:32.935471   55474 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
E0214 18:11:32.941790   55474 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I0214 18:11:32.942046   55474 shared_informer.go:213] Caches are synced for service account 
I0214 18:11:32.944927   51991 controller.go:606] quota admission added evaluator for: serviceaccounts
E0214 18:11:32.953345   55474 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
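These "object has been modified" errors are routine optimistic-concurrency conflicts: two writers raced on the same clusterrole, and the loser must re-read and retry against the latest resourceVersion, which the aggregation controller does automatically. Client code usually handles the same 409 with client-go's retry helper; a minimal sketch (the label edit is illustrative):

package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// labelClusterRole re-runs the read-modify-write loop whenever the update
// fails with a Conflict, so each attempt sees the newest resourceVersion.
func labelClusterRole(client kubernetes.Interface, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		role, err := client.RbacV1().ClusterRoles().Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if role.Labels == nil {
			role.Labels = map[string]string{}
		}
		role.Labels["example"] = "true"
		_, err = client.RbacV1().ClusterRoles().Update(context.Background(), role, metav1.UpdateOptions{})
		return err
	})
}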
I0214 18:11:33.315641   55474 shared_informer.go:213] Caches are synced for PVC protection 
I0214 18:11:33.315751   55474 shared_informer.go:213] Caches are synced for stateful set 
I0214 18:11:33.329952   55474 shared_informer.go:213] Caches are synced for endpoint_slice 
I0214 18:11:33.331367   55474 shared_informer.go:213] Caches are synced for HPA 
I0214 18:11:33.335564   55474 shared_informer.go:213] Caches are synced for endpoint 
I0214 18:11:33.335579   55474 shared_informer.go:213] Caches are synced for job 
I0214 18:11:33.341821   55474 shared_informer.go:213] Caches are synced for ReplicaSet 
I0214 18:11:33.342929   55474 shared_informer.go:213] Caches are synced for deployment 
I0214 18:11:33.429271   55474 shared_informer.go:213] Caches are synced for expand 
I0214 18:11:33.436048   55474 shared_informer.go:213] Caches are synced for disruption 
I0214 18:11:33.436081   55474 disruption.go:339] Sending events to api server.
I0214 18:11:33.436265   55474 shared_informer.go:213] Caches are synced for ReplicationController 
I0214 18:11:33.440848   55474 shared_informer.go:213] Caches are synced for PV protection 
W0214 18:11:33.443948   55474 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
I0214 18:11:33.447895   55474 shared_informer.go:213] Caches are synced for taint 
I0214 18:11:33.447947   55474 taint_manager.go:187] Starting NoExecuteTaintManager
I0214 18:11:33.448061   55474 node_lifecycle_controller.go:1444] Initializing eviction metric for zone: 
I0214 18:11:33.448158   55474 node_lifecycle_controller.go:1210] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0214 18:11:33.448164   55474 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"31941936-670f-48d1-a685-40c718af1a54", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller
I0214 18:11:33.459400   55474 shared_informer.go:213] Caches are synced for attach detach 
... skipping 87 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [0214 18:11:40] Creating namespace namespace-1581703900-27463
namespace/namespace-1581703900-27463 created
Context "test" modified.
+++ [0214 18:11:41] Testing RESTMapper
+++ [0214 18:11:41] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
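kubectl resolves resource names like "unknownresourcetype" through a discovery-backed RESTMapper, which is what this test exercises. A minimal sketch of the same lookup with client-go (the wrapper function is illustrative):

package example

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/restmapper"
)

// lookupKind maps a Kind to its REST resource, failing for unknown types
// much like "kubectl get unknownresourcetype" does above.
func lookupKind(cfg *rest.Config, kind string) error {
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		return err
	}
	groups, err := restmapper.GetAPIGroupResources(dc)
	if err != nil {
		return err
	}
	mapper := restmapper.NewDiscoveryRESTMapper(groups)
	mapping, err := mapper.RESTMapping(schema.GroupKind{Kind: kind})
	if err != nil {
		return fmt.Errorf("the server doesn't have a resource type %q: %w", kind, err)
	}
	fmt.Println("maps to resource:", mapping.Resource.Resource)
	return nil
}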
NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
bindings                                                                      true         Binding
componentstatuses                 cs                                          false        ComponentStatus
configmaps                        cm                                          true         ConfigMap
endpoints                         ep                                          true         Endpoints
... skipping 57 lines ...
namespace/namespace-1581703909-21196 created
Context "test" modified.
+++ [0214 18:11:50] Testing clusterroles
rbac.sh:29: Successful get clusterroles/cluster-admin {{.metadata.name}}: cluster-admin
rbac.sh:30: Successful get clusterrolebindings/cluster-admin {{.metadata.name}}: cluster-admin
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created (dry run)
clusterrole.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created
rbac.sh:42: Successful get clusterrole/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "pod-admin" deleted
... skipping 18 lines ...
clusterrole.rbac.authorization.k8s.io/url-reader created
rbac.sh:61: Successful get clusterrole/url-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:
rbac.sh:62: Successful get clusterrole/url-reader {{range.rules}}{{range.nonResourceURLs}}{{.}}:{{end}}{{end}}: /logs/*:/healthz/*:
clusterrole.rbac.authorization.k8s.io/aggregation-reader created
rbac.sh:64: Successful get clusterrole/aggregation-reader {{.metadata.name}}: aggregation-reader
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created
rbac.sh:77: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (server dry run)
rbac.sh:80: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
... skipping 58 lines ...
rbac.sh:102: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:foo:test-all-user:
rbac.sh:103: Successful get clusterrolebinding/super-group {{range.subjects}}{{.name}}:{{end}}: the-group:foo:test-all-user:
rbac.sh:104: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.name}}:{{end}}: sa-name:foo:test-all-user:
rolebinding.rbac.authorization.k8s.io/admin created (dry run)
rolebinding.rbac.authorization.k8s.io/admin created (server dry run)
Successful
message:Error from server (NotFound): rolebindings.rbac.authorization.k8s.io "admin" not found
has: not found
rolebinding.rbac.authorization.k8s.io/admin created
rbac.sh:113: Successful get rolebinding/admin {{.roleRef.kind}}: ClusterRole
rbac.sh:114: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:
rolebinding.rbac.authorization.k8s.io/admin subjects updated
rbac.sh:116: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:foo:
... skipping 25 lines ...
namespace/namespace-1581703928-14082 created
Context "test" modified.
+++ [0214 18:12:08] Testing role
role.rbac.authorization.k8s.io/pod-admin created (dry run)
role.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): roles.rbac.authorization.k8s.io "pod-admin" not found
has: not found
role.rbac.authorization.k8s.io/pod-admin created
rbac.sh:155: Successful get role/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
rbac.sh:156: Successful get role/pod-admin {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
rbac.sh:157: Successful get role/pod-admin {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
Successful
... skipping 411 lines ...
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
core.sh:189: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: resource(s) were provided, but no name, label selector, or --all flag specified
core.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:197: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: setting 'all' parameter but found a non empty selector. 
core.sh:201: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:205: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:209: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:214: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
... skipping 19 lines ...
poddisruptionbudget.policy/test-pdb-2 created
core.sh:258: Successful get pdb/test-pdb-2 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 50%
poddisruptionbudget.policy/test-pdb-3 created
core.sh:264: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
poddisruptionbudget.policy/test-pdb-4 created
core.sh:268: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
error: min-available and max-unavailable cannot be both specified
core.sh:274: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
pod/env-test-pod created
matched TEST_CMD_1
matched <set to the key 'key-1' in secret 'test-secret'>
matched TEST_CMD_2
matched <set to the key 'key-2' of config map 'test-configmap'>
... skipping 206 lines ...
pod/valid-pod patched
core.sh:517: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
pod/valid-pod patched
core.sh:522: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
pod/valid-pod patched
core.sh:538: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
+++ [0214 18:13:16] "kubectl patch with resourceVersion 608" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
pod "valid-pod" deleted
pod/valid-pod replaced
core.sh:562: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
Successful
message:error: --grace-period must have --force specified
has:\-\-grace-period must have \-\-force specified
Successful
message:error: --timeout must have --force specified
has:\-\-timeout must have \-\-force specified
W0214 18:13:19.220079   55474 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
node/node-v1-test created
node/node-v1-test replaced
core.sh:599: Successful get node node-v1-test {{.metadata.annotations.a}}: b
node "node-v1-test" deleted
core.sh:606: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
core.sh:609: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
... skipping 23 lines ...
spec:
  containers:
  - image: k8s.gcr.io/pause:2.0
    name: kubernetes-pause
has:localonlyvalue
core.sh:632: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
error: 'name' already has a value (valid-pod), and --overwrite is false
core.sh:636: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
core.sh:640: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
pod/valid-pod labeled
core.sh:644: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
core.sh:648: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 83 lines ...
+++ Running case: test-cmd.run_kubectl_create_error_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_error_tests
+++ [0214 18:13:44] Creating namespace namespace-1581704024-31842
namespace/namespace-1581704024-31842 created
Context "test" modified.
+++ [0214 18:13:44] Testing kubectl create with error
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
... skipping 41 lines ...

Usage:
  kubectl create -f FILENAME [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
+++ [0214 18:13:45] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
+++ exit code: 0
Recording: run_kubectl_apply_tests
Running command: run_kubectl_apply_tests

... skipping 17 lines ...
pod "test-pod" deleted
customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
I0214 18:13:52.082646   51991 client.go:361] parsed scheme: "endpoint"
I0214 18:13:52.083012   51991 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 18:13:52.087389   51991 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
kind.mygroup.example.com/myobj serverside-applied (server dry run)
Error from server (NotFound): resources.mygroup.example.com "myobj" not found
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
+++ exit code: 0
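The apply test above runs a server-side apply with a server dry run against a just-created CRD: admission and validation execute, but nothing is persisted, which is why the following get reports NotFound. A sketch of the same call through the dynamic client (the group/version and manifest are stand-ins, since the test's full CRD spec isn't shown in the log):

package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
)

func dryRunApply(client dynamic.Interface) error {
	gvr := schema.GroupVersionResource{Group: "mygroup.example.com", Version: "v1", Resource: "resources"}
	manifest := []byte(`{"apiVersion":"mygroup.example.com/v1","kind":"Kind","metadata":{"name":"myobj"}}`)

	// DryRun=All makes the apiserver run the full request path without
	// writing to etcd, so a later live get still returns NotFound.
	_, err := client.Resource(gvr).Patch(context.Background(), "myobj",
		types.ApplyPatchType, manifest, metav1.PatchOptions{
			FieldManager: "example-manager",
			DryRun:       []string{metav1.DryRunAll},
		})
	return err
}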
Recording: run_kubectl_run_tests
Running command: run_kubectl_run_tests

+++ Running case: test-cmd.run_kubectl_run_tests 
... skipping 12 lines ...
pod "nginx-extensions" deleted
Successful
message:pod/test1 created
has:pod/test1 created
pod "test1" deleted
Successful
message:error: Invalid image name "InvalidImageName": invalid reference format
has:error: Invalid image name "InvalidImageName": invalid reference format
+++ exit code: 0
Recording: run_kubectl_create_filter_tests
Running command: run_kubectl_create_filter_tests

+++ Running case: test-cmd.run_kubectl_create_filter_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 3 lines ...
Context "test" modified.
+++ [0214 18:13:56] Testing kubectl create filter
create.sh:50: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/selector-test-pod created
create.sh:54: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
+++ exit code: 0
I0214 18:13:57.599812   55474 horizontal.go:354] Horizontal Pod Autoscaler frontend has been deleted in namespace-1581704017-2084
Recording: run_kubectl_apply_deployments_tests
Running command: run_kubectl_apply_deployments_tests
... skipping 32 lines ...
I0214 18:14:03.168528   55474 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581704037-9518", Name:"nginx", UID:"ce9218e5-5245-4692-a1c6-c1ddf83cf1de", APIVersion:"apps/v1", ResourceVersion:"695", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-8484dd655 to 3
I0214 18:14:03.176221   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704037-9518", Name:"nginx-8484dd655", UID:"f06f034c-0787-4a97-a2db-aeb03ea71d1f", APIVersion:"apps/v1", ResourceVersion:"696", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-ggxbw
I0214 18:14:03.179722   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704037-9518", Name:"nginx-8484dd655", UID:"f06f034c-0787-4a97-a2db-aeb03ea71d1f", APIVersion:"apps/v1", ResourceVersion:"696", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-w5zmx
I0214 18:14:03.180796   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704037-9518", Name:"nginx-8484dd655", UID:"f06f034c-0787-4a97-a2db-aeb03ea71d1f", APIVersion:"apps/v1", ResourceVersion:"696", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-bnrx6
apps.sh:149: Successful get deployment nginx {{.metadata.name}}: nginx
Successful
message:Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1581704037-9518\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx", Namespace: "namespace-1581704037-9518"
for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
has:Error from server (Conflict)
E0214 18:14:12.880629   55474 replica_set.go:535] sync "namespace-1581704037-9518/nginx-8484dd655" failed with Operation cannot be fulfilled on replicasets.apps "nginx-8484dd655": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1581704037-9518/nginx-8484dd655, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: f06f034c-0787-4a97-a2db-aeb03ea71d1f, UID in object meta: 
deployment.apps/nginx configured
I0214 18:14:13.594404   55474 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581704037-9518", Name:"nginx", UID:"a423fae1-2595-4690-8762-9e0d32d30f08", APIVersion:"apps/v1", ResourceVersion:"737", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-668b6c7744 to 3
I0214 18:14:13.601086   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704037-9518", Name:"nginx-668b6c7744", UID:"0c76a557-b5a2-4389-9ccf-3661b2b61516", APIVersion:"apps/v1", ResourceVersion:"738", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-ljrb7
I0214 18:14:13.603945   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704037-9518", Name:"nginx-668b6c7744", UID:"0c76a557-b5a2-4389-9ccf-3661b2b61516", APIVersion:"apps/v1", ResourceVersion:"738", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-bqljm
I0214 18:14:13.606159   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704037-9518", Name:"nginx-668b6c7744", UID:"0c76a557-b5a2-4389-9ccf-3661b2b61516", APIVersion:"apps/v1", ResourceVersion:"738", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-8szjr
Successful
... skipping 146 lines ...
+++ [0214 18:14:26] Creating namespace namespace-1581704066-11766
namespace/namespace-1581704066-11766 created
Context "test" modified.
+++ [0214 18:14:27] Testing kubectl get
get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:{
    "apiVersion": "v1",
    "items": [],
... skipping 23 lines ...
has not:No resources found
Successful
message:NAME
has not:No resources found
get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:error: the server doesn't have a resource type "foobar"
has not:No resources found
Successful
message:No resources found in namespace-1581704066-11766 namespace.
has:No resources found
Successful
message:
has not:No resources found
Successful
message:No resources found in namespace-1581704066-11766 namespace.
has:No resources found
get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
Successful
message:Error from server (NotFound): pods "abc" not found
has not:List
Successful
message:I0214 18:14:31.810921   66317 loader.go:375] Config loaded from file:  /tmp/tmp.NCoOBbThqQ/.kube/config
I0214 18:14:31.812325   66317 round_trippers.go:443] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 0 milliseconds
I0214 18:14:31.868982   66317 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
I0214 18:14:31.872265   66317 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/replicationcontrollers 200 OK in 2 milliseconds
... skipping 482 lines ...
Successful
message:NAME    DATA   AGE
one     0      0s
three   0      0s
two     0      0s
STATUS    REASON          MESSAGE
Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has not:watch is only supported on individual resources
Successful
message:STATUS    REASON          MESSAGE
Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has not:watch is only supported on individual resources
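The InternalError lines here are the expected teardown of a watch that kubectl opened with a short client-side request timeout: the server keeps streaming, the client's HTTP timeout fires mid-stream, and decoding the next event fails. Programmatic watches normally bound the stream server-side instead; a minimal client-go sketch (namespace and timeout are illustrative):

package example

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchConfigMaps asks the server to end the watch after timeoutSeconds,
// so the stream closes cleanly rather than dying on a client timeout.
func watchConfigMaps(client kubernetes.Interface, ns string) error {
	timeoutSeconds := int64(30)
	w, err := client.CoreV1().ConfigMaps(ns).Watch(context.Background(),
		metav1.ListOptions{TimeoutSeconds: &timeoutSeconds})
	if err != nil {
		return err
	}
	defer w.Stop()
	for event := range w.ResultChan() {
		fmt.Println("event:", event.Type)
	}
	return nil // channel closed: the server ended the watch after the timeout
}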
+++ [0214 18:14:40] Creating namespace namespace-1581704080-29987
namespace/namespace-1581704080-29987 created
Context "test" modified.
get.sh:153: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/valid-pod created
... skipping 56 lines ...
}
get.sh:158: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
<no value>Successful
message:valid-pod:
has:valid-pod:
Successful
message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2020-02-14T18:14:41Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1581704080-29987", "resourceVersion":"835", "selfLink":"/api/v1/namespaces/namespace-1581704080-29987/pods/valid-pod", "uid":"37d0148a-cee6-414a-9a2f-35a3b8753e29"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
has:missing is not found
error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
Successful
message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2020-02-14T18:14:41Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1581704080-29987","resourceVersion":"835","selfLink":"/api/v1/namespaces/namespace-1581704080-29987/pods/valid-pod","uid":"37d0148a-cee6-414a-9a2f-35a3b8753e29"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2020-02-14T18:14:41Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1581704080-29987 resourceVersion:835 selfLink:/api/v1/namespaces/namespace-1581704080-29987/pods/valid-pod uid:37d0148a-cee6-414a-9a2f-35a3b8753e29] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
has:map has no entry for key "missing"
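The two failures above are the printers' missing-key behaviors: jsonpath reports "missing is not found", while the go-template path fails the same way plain text/template does with the missingkey=error option, producing "map has no entry for key". The template half reproduces standalone:

package main

import (
	"os"
	"text/template"
)

func main() {
	pod := map[string]interface{}{"kind": "Pod"} // no "missing" key

	tmpl := template.Must(template.New("output").Parse("{{.missing}}"))
	// With missingkey=error, looking up an absent map key fails instead
	// of printing "<no value>", yielding the same error text as the log.
	tmpl = tmpl.Option("missingkey=error")
	if err := tmpl.Execute(os.Stdout, pod); err != nil {
		os.Stderr.WriteString(err.Error() + "\n")
	}
}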
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          2s
STATUS      REASON          MESSAGE
Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has:STATUS
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          2s
STATUS      REASON          MESSAGE
Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has:valid-pod
Successful
message:pod/valid-pod
status/<unknown>
has not:STATUS
Successful
... skipping 45 lines ...
      (Client.Timeout exceeded while reading body)'
    reason: UnexpectedServerResponse
  - message: 'unable to decode an event from the watch stream: net/http: request canceled
      (Client.Timeout exceeded while reading body)'
    reason: ClientWatchDecoding
kind: Status
message: 'an error on the server ("unable to decode an event from the watch stream:
  net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented
  the request from succeeding'
metadata: {}
reason: InternalError
status: Failure
has not:STATUS
... skipping 42 lines ...
      (Client.Timeout exceeded while reading body)'
    reason: UnexpectedServerResponse
  - message: 'unable to decode an event from the watch stream: net/http: request canceled
      (Client.Timeout exceeded while reading body)'
    reason: ClientWatchDecoding
kind: Status
message: 'an error on the server ("unable to decode an event from the watch stream:
  net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented
  the request from succeeding'
metadata: {}
reason: InternalError
status: Failure
has:name: valid-pod
Successful
message:Error from server (NotFound): pods "invalid-pod" not found
has:"invalid-pod" not found
pod "valid-pod" deleted
get.sh:196: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/redis-master created
pod/valid-pod created
Successful
... skipping 35 lines ...
+++ command: run_kubectl_exec_pod_tests
+++ [0214 18:14:51] Creating namespace namespace-1581704091-14520
namespace/namespace-1581704091-14520 created
Context "test" modified.
+++ [0214 18:14:51] Testing kubectl exec POD COMMAND
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
pod/test-pod created
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pods "test-pod" not found
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
pod "test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_exec_resource_name_tests
Running command: run_kubectl_exec_resource_name_tests

... skipping 2 lines ...
+++ command: run_kubectl_exec_resource_name_tests
+++ [0214 18:14:53] Creating namespace namespace-1581704093-16880
namespace/namespace-1581704093-16880 created
Context "test" modified.
+++ [0214 18:14:53] Testing kubectl exec TYPE/NAME COMMAND
Successful
message:error: the server doesn't have a resource type "foo"
has:error:
Successful
message:Error from server (NotFound): deployments.apps "bar" not found
has:"bar" not found
pod/test-pod created
replicaset.apps/frontend created
I0214 18:14:54.877329   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704093-16880", Name:"frontend", UID:"e6e0c566-3661-42f6-adc1-16a3583c04e7", APIVersion:"apps/v1", ResourceVersion:"901", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-n5krf
I0214 18:14:54.879500   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704093-16880", Name:"frontend", UID:"e6e0c566-3661-42f6-adc1-16a3583c04e7", APIVersion:"apps/v1", ResourceVersion:"901", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-dxtkt
I0214 18:14:54.882874   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704093-16880", Name:"frontend", UID:"e6e0c566-3661-42f6-adc1-16a3583c04e7", APIVersion:"apps/v1", ResourceVersion:"901", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-qrgpb
configmap/test-set-env-config created
Successful
message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
has:not implemented
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:not found
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
Successful
message:Error from server (BadRequest): pod frontend-dxtkt does not have a host assigned
has not:not found
Successful
message:Error from server (BadRequest): pod frontend-dxtkt does not have a host assigned
has not:pod or type/name must be specified
pod "test-pod" deleted
replicaset.apps "frontend" deleted
configmap "test-set-env-config" deleted
+++ exit code: 0
Recording: run_create_secret_tests
Running command: run_create_secret_tests

+++ Running case: test-cmd.run_create_secret_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_create_secret_tests
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
W0214 18:14:57.094493   67471 helpers.go:534] --dry-run is deprecated and can be replaced with --dry-run=client.
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
message:user-specified
has:user-specified
W0214 18:14:57.464714   67500 helpers.go:534] --dry-run is deprecated and can be replaced with --dry-run=client.
Successful
... skipping 4 lines ...
has:uid
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"b4cb6e5f-a467-4980-a47b-1f4eaeb4502d","resourceVersion":"926","creationTimestamp":"2020-02-14T18:14:57Z"},"data":{"key1":"config1"}}
has:config1
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"b4cb6e5f-a467-4980-a47b-1f4eaeb4502d"}}
Successful
message:Error from server (NotFound): configmaps "tester-update-cm" not found
has:configmaps "tester-update-cm" not found
+++ exit code: 0
Recording: run_kubectl_create_kustomization_directory_tests
Running command: run_kubectl_create_kustomization_directory_tests

+++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 110 lines ...
valid-pod   0/1     Pending   0          0s
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
STATUS      REASON          MESSAGE
Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has:Timeout exceeded while reading body
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          2s
has:valid-pod
Successful
message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
has:Invalid timeout value
pod "valid-pod" deleted
+++ exit code: 0
Recording: run_crd_tests
Running command: run_crd_tests

... skipping 158 lines ...
foo.company.com/test patched
crd.sh:236: Successful get foos/test {{.patched}}: value1
foo.company.com/test patched
crd.sh:238: Successful get foos/test {{.patched}}: value2
foo.company.com/test patched
crd.sh:240: Successful get foos/test {{.patched}}: <no value>
+++ [0214 18:15:21] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
{
    "apiVersion": "company.com/v1",
    "kind": "Foo",
    "metadata": {
        "annotations": {
            "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 195 lines ...
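The "try --type merge" hint above exists because strategic merge patch needs the server-side Go struct metadata (patchStrategy/patchMergeKey tags) that custom resources don't have, so CRs must be patched with a plain JSON merge patch. A dynamic-client sketch of the working variant (GVR and patch body follow the test's foos/test object; the helper is illustrative):

package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
)

// patchFoo applies a JSON merge patch, the patch type kubectl falls back
// to for custom resources; strategic merge would be rejected as above.
func patchFoo(client dynamic.Interface, ns string) error {
	gvr := schema.GroupVersionResource{Group: "company.com", Version: "v1", Resource: "foos"}
	patch := []byte(`{"patched":"value2"}`)
	_, err := client.Resource(gvr).Namespace(ns).Patch(context.Background(), "test",
		types.MergePatchType, patch, metav1.PatchOptions{})
	return err
}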
crd.sh:450: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: 
namespace/non-native-resources created
bar.company.com/test created
crd.sh:455: Successful get bars {{len .items}}: 1
namespace "non-native-resources" deleted
crd.sh:458: Successful get bars {{len .items}}: 0
Error from server (NotFound): namespaces "non-native-resources" not found
customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
+++ exit code: 0
+++ [0214 18:16:03] Testing recursive resources
+++ [0214 18:16:03] Creating namespace namespace-1581704163-5421
W0214 18:16:03.199478   51991 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured
E0214 18:16:03.200933   55474 reflector.go:380] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
E0214 18:16:03.201699   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/namespace-1581704163-5421 created
W0214 18:16:03.423266   51991 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured
E0214 18:16:03.425460   55474 reflector.go:380] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
E0214 18:16:03.426175   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Context "test" modified.
W0214 18:16:03.667477   51991 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured
E0214 18:16:03.669241   55474 reflector.go:380] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
E0214 18:16:03.669988   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
W0214 18:16:03.860755   51991 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured
E0214 18:16:03.861793   55474 reflector.go:380] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
E0214 18:16:03.862728   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:pod/busybox0 created
pod/busybox1 created
error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0214 18:16:05.268618   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
Successful
message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
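Every "Object 'Kind' is missing" failure in this section traces back to the same broken fixture: busybox-broken.yaml spells the kind field as "ind", so the decoder has no type information. The error reproduces directly with the client-go scheme's universal deserializer:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes/scheme"
)

func main() {
	// "kind" is misspelled as "ind", exactly as in the test fixture, so
	// the decoder cannot tell what API type this document is.
	broken := []byte(`{"apiVersion":"v1","ind":"Pod","metadata":{"name":"busybox2"}}`)

	_, _, err := scheme.Codecs.UniversalDeserializer().Decode(broken, nil, nil)
	fmt.Println(err) // Object 'Kind' is missing in '{...}'
}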
generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0214 18:16:05.904820   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0214 18:16:06.099062   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:pod/busybox0 replaced
pod/busybox1 replaced
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0214 18:16:06.511474   55474 namespace_controller.go:185] Namespace has been deleted non-native-resources
Successful
message:Name:         busybox0
Namespace:    namespace-1581704163-5421
Priority:     0
... skipping 155 lines ...
QoS Class:        BestEffort
Node-Selectors:   <none>
Tolerations:      <none>
Events:           <none>
unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
E0214 18:16:06.924330   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
Successful
message:pod/busybox0 annotated
pod/busybox1 annotated
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
I0214 18:16:08.603454   55474 shared_informer.go:206] Waiting for caches to sync for garbage collector
I0214 18:16:08.603519   55474 shared_informer.go:213] Caches are synced for garbage collector 
I0214 18:16:08.692497   55474 shared_informer.go:206] Waiting for caches to sync for resource quota
I0214 18:16:08.692548   55474 shared_informer.go:213] Caches are synced for resource quota 
generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx created
... skipping 46 lines ...
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}
has:extensions/v1beta1
E0214 18:16:10.483722   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps "nginx" deleted
E0214 18:16:10.899701   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Waiting for Get pods {{range.items}}{{.metadata.name}}:{{end}} : expected: busybox0:busybox1:, got: busybox0:busybox1:nginx-f87d999f7-2v2tp:nginx-f87d999f7-t8mh5:nginx-f87d999f7-xp92r:
generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0214 18:16:11.428886   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
E0214 18:16:12.256250   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
Successful
message:pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
Successful
message:pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "busybox0" force deleted
pod "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/busybox0 created
I0214 18:16:14.883060   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581704163-5421", Name:"busybox0", UID:"8d787af8-65eb-44b6-bbbb-123eb6168e31", APIVersion:"v1", ResourceVersion:"1182", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-9tlqq
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0214 18:16:14.888652   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581704163-5421", Name:"busybox1", UID:"c13fc72c-f5e1-49c1-8939-3765a5fb10e5", APIVersion:"v1", ResourceVersion:"1184", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-jrbwc
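The "kind not set" failure above is kubectl's client-side validation rejecting that same broken file during a recursive create. A hypothetical invocation of the kind being exercised here -- the exact command line is not shown in this log:

  kubectl create -f hack/testdata/recursive/rc --recursive
  # as the message says, validation can be bypassed for the broken manifest:
  kubectl create -f hack/testdata/recursive/rc --recursive --validate=false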
generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
Successful
message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
horizontalpodautoscaler.autoscaling/busybox1 autoscaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
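The hpa checks above -- min 1, max 2, target CPU 80% -- come from autoscaling both intact replication controllers in one recursive pass. A hedged sketch; the flags are assumed, not quoted from the test script:

  kubectl autoscale -f hack/testdata/recursive/rc --recursive --min=1 --max=2 --cpu-percent=80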
horizontalpodautoscaler.autoscaling "busybox0" deleted
horizontalpodautoscaler.autoscaling "busybox1" deleted
generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
E0214 18:16:18.144508   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
Successful
message:service/busybox0 exposed
service/busybox1 exposed
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0214 18:16:18.671373   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I0214 18:16:19.282764   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581704163-5421", Name:"busybox0", UID:"8d787af8-65eb-44b6-bbbb-123eb6168e31", APIVersion:"v1", ResourceVersion:"1210", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-2pm99
I0214 18:16:19.301653   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581704163-5421", Name:"busybox1", UID:"c13fc72c-f5e1-49c1-8939-3765a5fb10e5", APIVersion:"v1", ResourceVersion:"1215", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-z8dk6
generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
Successful
message:replicationcontroller/busybox0 scaled
replicationcontroller/busybox1 scaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
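The "Immediate deletion does not wait..." warnings in these blocks are kubectl's standard response to a forced, zero-grace-period delete. A hypothetical form of the cleanup step:

  kubectl delete -f hack/testdata/recursive/rc --recursive --force --grace-period=0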
generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx1-deployment created
I0214 18:16:21.118106   55474 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581704163-5421", Name:"nginx1-deployment", UID:"675e1b48-10eb-4f9f-a7d4-48573104d39a", APIVersion:"apps/v1", ResourceVersion:"1232", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-7bdbbfb5cf to 2
deployment.apps/nginx0-deployment created
error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0214 18:16:21.121502   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704163-5421", Name:"nginx1-deployment-7bdbbfb5cf", UID:"bf502fbe-b048-41ee-b402-d6e50ec9b8b6", APIVersion:"apps/v1", ResourceVersion:"1233", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-frfdn
I0214 18:16:21.132542   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704163-5421", Name:"nginx1-deployment-7bdbbfb5cf", UID:"bf502fbe-b048-41ee-b402-d6e50ec9b8b6", APIVersion:"apps/v1", ResourceVersion:"1233", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-knhcn
I0214 18:16:21.135590   55474 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581704163-5421", Name:"nginx0-deployment", UID:"55117a21-6a92-4de7-8368-25135d517e26", APIVersion:"apps/v1", ResourceVersion:"1234", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57c6bff7f6 to 2
I0214 18:16:21.137647   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704163-5421", Name:"nginx0-deployment-57c6bff7f6", UID:"f3333d4f-e79e-42d0-96c5-345cdfdad587", APIVersion:"apps/v1", ResourceVersion:"1239", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-kdnzj
I0214 18:16:21.143912   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704163-5421", Name:"nginx0-deployment-57c6bff7f6", UID:"f3333d4f-e79e-42d0-96c5-345cdfdad587", APIVersion:"apps/v1", ResourceVersion:"1239", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-x5lhp
generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
Successful
message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
deployment.apps/nginx1-deployment paused
deployment.apps/nginx0-deployment paused
generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
Successful
message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
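The paused:true / paused:<no value> assertions around this point track recursive rollout pause and resume calls against the deployment directory. A sketch, under the assumption that the test drives them by filename:

  kubectl rollout pause -f hack/testdata/recursive/deployment --recursive
  kubectl rollout resume -f hack/testdata/recursive/deployment --recursive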
E0214 18:16:22.848106   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx1-deployment resumed
deployment.apps/nginx0-deployment resumed
E0214 18:16:23.040173   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:410: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: <no value>:<no value>:
Successful
message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx0-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx1-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
deployment.apps "nginx1-deployment" force deleted
deployment.apps "nginx0-deployment" force deleted
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/busybox0 created
I0214 18:16:25.496348   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581704163-5421", Name:"busybox0", UID:"b6eb2a39-1fdc-425a-ac46-3637d1912808", APIVersion:"v1", ResourceVersion:"1285", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-266hj
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0214 18:16:25.503301   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581704163-5421", Name:"busybox1", UID:"a83e2045-1a08-4c47-bd8f-fc401091b10f", APIVersion:"v1", ResourceVersion:"1287", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-llv2d
generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
... skipping 2 lines ...
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox0" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox1" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox0" resuming is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox1" resuming is not supported
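Rollout undo, pause, and resume are only implemented for rollable kinds such as Deployments and DaemonSets, which is why every ReplicationController attempt above fails. An illustrative, hypothetical call:

  kubectl rollout pause rc/busybox0   # error: replicationcontrollers "busybox0" pausing is not supported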
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
Recording: run_namespace_tests
Running command: run_namespace_tests

+++ Running case: test-cmd.run_namespace_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_namespace_tests
+++ [0214 18:16:27] Testing kubectl(v1:namespaces)
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created (dry run)
namespace/my-namespace created (server dry run)
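The two creates above exercise client-side and server-side dry run; neither persists the namespace, which is why the lookup that follows still fails. A hedged sketch of the likely commands:

  kubectl create namespace my-namespace --dry-run=client
  kubectl create namespace my-namespace --dry-run=server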
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created
core.sh:1384: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
namespace "my-namespace" deleted
I0214 18:16:30.926378   55474 horizontal.go:354] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1581704163-5421
I0214 18:16:30.935543   55474 horizontal.go:354] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1581704163-5421
namespace/my-namespace condition met
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created
core.sh:1393: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
... skipping 28 lines ...
namespace "namespace-1581704101-24563" deleted
namespace "namespace-1581704101-47" deleted
namespace "namespace-1581704103-18263" deleted
namespace "namespace-1581704107-23717" deleted
namespace "namespace-1581704111-22460" deleted
namespace "namespace-1581704163-5421" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:warning: deleting cluster-scoped resources
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
namespace "my-namespace" deleted
namespace "namespace-1581703896-32671" deleted
... skipping 26 lines ...
namespace "namespace-1581704101-24563" deleted
namespace "namespace-1581704101-47" deleted
namespace "namespace-1581704103-18263" deleted
namespace "namespace-1581704107-23717" deleted
namespace "namespace-1581704111-22460" deleted
namespace "namespace-1581704163-5421" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:namespace "my-namespace" deleted
E0214 18:16:35.924318   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/quotas created
core.sh:1400: Successful get namespaces/quotas {{.metadata.name}}: quotas
(Bcore.sh:1401: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name \"test-quota\" }}found{{end}}{{end}}:: :
(Bresourcequota/test-quota created (dry run)
resourcequota/test-quota created (server dry run)
core.sh:1405: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name \"test-quota\" }}found{{end}}{{end}}:: :
resourcequota/test-quota created
core.sh:1408: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name \"test-quota\" }}found{{end}}{{end}}:: found:
resourcequota "test-quota" deleted
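A sketch of the resourcequota round trip being verified here; any --hard limits passed by the test are not visible in this log:

  kubectl create quota test-quota --namespace=quotas --dry-run=server
  kubectl create quota test-quota --namespace=quotas
  kubectl delete quota test-quota --namespace=quotas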
I0214 18:16:38.015668   55474 resource_quota_controller.go:306] Resource quota has been deleted quotas/test-quota
namespace "quotas" deleted
E0214 18:16:42.763824   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1420: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
E0214 18:16:43.557304   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/other created
E0214 18:16:43.765576   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1424: Successful get namespaces/other {{.metadata.name}}: other
core.sh:1428: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
pod/valid-pod created
core.sh:1432: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0214 18:16:45.020099   55474 namespace_controller.go:185] Namespace has been deleted my-namespace
core.sh:1434: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Successful
message:error: a resource cannot be retrieved by name across all namespaces
has:a resource cannot be retrieved by name across all namespaces
core.sh:1441: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:1445: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
namespace "other" deleted
... skipping 127 lines ...
core.sh:28: Successful get configmap/test-configmap {{.metadata.name}}: test-configmap
configmap "test-configmap" deleted
core.sh:33: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-configmaps\" }}found{{end}}{{end}}:: :
namespace/test-configmaps created
core.sh:37: Successful get namespaces/test-configmaps {{.metadata.name}}: test-configmaps
core.sh:41: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-configmap\" }}found{{end}}{{end}}:: :
E0214 18:17:07.373612   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:42: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-binary-configmap\" }}found{{end}}{{end}}:: :
configmap/test-configmap created (dry run)
configmap/test-configmap created (server dry run)
core.sh:46: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-configmap\" }}found{{end}}{{end}}:: :
configmap/test-configmap created
configmap/test-binary-configmap created
... skipping 3 lines ...
configmap "test-binary-configmap" deleted
I0214 18:17:09.836375   55474 namespace_controller.go:185] Namespace has been deleted test-secrets
namespace "test-configmaps" deleted
+++ exit code: 0
Recording: run_client_config_tests
Running command: run_client_config_tests
E0214 18:17:15.207521   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource

+++ Running case: test-cmd.run_client_config_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_client_config_tests
+++ [0214 18:17:15] Creating namespace namespace-1581704235-32296
namespace/namespace-1581704235-32296 created
Context "test" modified.
+++ [0214 18:17:15] Testing client config
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:Error in configuration: context was not found for specified context: missing-context
has:context was not found for specified context: missing-context
Successful
message:error: no server found for cluster "missing-cluster"
has:no server found for cluster "missing-cluster"
Successful
message:error: auth info "missing-user" does not exist
has:auth info "missing-user" does not exist
Successful
message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
has:error loading config file
Successful
message:error: stat missing-config: no such file or directory
has:no such file or directory
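Each client-config check above corresponds to a distinct kubeconfig failure mode. Hypothetical reproductions -- the resource being fetched is an assumption:

  kubectl get pods --kubeconfig=missing        # error: stat missing: no such file or directory
  kubectl get pods --context=missing-context   # context was not found for specified context
  kubectl get pods --cluster=missing-cluster   # no server found for cluster "missing-cluster"
  kubectl get pods --user=missing-user         # auth info "missing-user" does not exist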
+++ exit code: 0
Recording: run_service_accounts_tests
Running command: run_service_accounts_tests

+++ Running case: test-cmd.run_service_accounts_tests 
... skipping 12 lines ...
core.sh:887: Successful get serviceaccount --namespace=test-service-accounts {{range.items}}{{ if eq .metadata.name \"test-service-account\" }}found{{end}}{{end}}:: :
I0214 18:17:19.985845   55474 namespace_controller.go:185] Namespace has been deleted test-configmaps
serviceaccount/test-service-account created
core.sh:891: Successful get serviceaccount/test-service-account --namespace=test-service-accounts {{.metadata.name}}: test-service-account
serviceaccount "test-service-account" deleted
namespace "test-service-accounts" deleted
E0214 18:17:22.560542   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0214 18:17:23.981218   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Recording: run_job_tests
Running command: run_job_tests

+++ Running case: test-cmd.run_job_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 19 lines ...
Labels:                        <none>
Annotations:                   <none>
Schedule:                      59 23 31 2 *
Concurrency Policy:            Allow
Suspend:                       False
Successful Job History Limit:  3
Failed Job History Limit:      1
Starting Deadline Seconds:     <unset>
Selector:                      <unset>
Parallelism:                   <unset>
Completions:                   <unset>
Pod Template:
  Labels:  <none>
... skipping 39 lines ...
                job-name=test-job
Annotations:    cronjob.kubernetes.io/instantiate: manual
Controlled By:  CronJob/pi
Parallelism:    1
Completions:    1
Start Time:     Fri, 14 Feb 2020 18:17:31 +0000
Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=4015ac95-ad01-421a-aedc-544151fbd82b
           job-name=test-job
  Containers:
   pi:
    Image:      k8s.gcr.io/perl
... skipping 314 lines ...
  type: ClusterIP
status:
  loadBalancer: {}
service/redis-master selector updated
core.sh:943: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: padawan:
service/redis-master selector updated
E0214 18:17:49.208537   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:947: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-02-14T18:17:45Z"
  labels:
... skipping 14 lines ...
  selector:
    role: padawan
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
E0214 18:17:49.775649   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2020-02-14T18:17:45Z"
  labels:
    app: redis
... skipping 13 lines ...
  selector:
    role: padawan
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
core.sh:952: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
service/redis-master selector updated
Successful
message:Error from server (Conflict): Operation cannot be fulfilled on services "redis-master": the object has been modified; please apply your changes to the latest version and try again
has:Conflict
core.sh:965: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
service "redis-master" deleted
core.sh:972: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
core.sh:976: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
service/redis-master created
... skipping 66 lines ...
apps.sh:34: Successful get daemonsets bind {{.metadata.generation}}: 1
daemonset.apps/bind configured
apps.sh:37: Successful get daemonsets bind {{.metadata.generation}}: 1
daemonset.apps/bind image updated
apps.sh:40: Successful get daemonsets bind {{.metadata.generation}}: 2
daemonset.apps/bind env updated
E0214 18:18:06.805543   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:42: Successful get daemonsets bind {{.metadata.generation}}: 3
daemonset.apps/bind resource requirements updated
apps.sh:44: Successful get daemonsets bind {{.metadata.generation}}: 4
daemonset.apps/bind restarted
apps.sh:48: Successful get daemonsets bind {{.metadata.generation}}: 5
daemonset.apps "bind" deleted
... skipping 35 lines ...
 (dry run)
daemonset.apps/bind rolled back (server dry run)
apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
apps.sh:85: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:86: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
daemonset.apps/bind rolled back
E0214 18:18:13.819135   55474 daemon_controller.go:292] namespace-1581704288-12680/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1581704288-12680", SelfLink:"/apis/apps/v1/namespaces/namespace-1581704288-12680/daemonsets/bind", UID:"ca955c0a-f800-490f-98d2-c01ee832a4fc", ResourceVersion:"1850", Generation:3, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717301089, loc:(*time.Location)(0x6c69560)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"3", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1581704288-12680\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001cb9aa0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:2.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, 
EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002b68578), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0020e06c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc001cb9ac0), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000e26ae0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002b685cc)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:2, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
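That failure is kubectl rollout undo being pointed at a revision number that never existed in the daemonset's history. A hedged example:

  kubectl rollout undo daemonset/bind --to-revision=1000000   # error: unable to find specified revision 1000000 in history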
apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:95: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
E0214 18:18:15.252174   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
daemonset.apps/bind rolled back
apps.sh:98: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
apps.sh:99: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:100: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
daemonset.apps "bind" deleted
+++ exit code: 0
... skipping 33 lines ...
Namespace:    namespace-1581704296-6929
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1581704296-6929
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
Namespace:    namespace-1581704296-6929
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
Namespace:    namespace-1581704296-6929
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 27 lines ...
Namespace:    namespace-1581704296-6929
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1581704296-6929
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1581704296-6929
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
Namespace:    namespace-1581704296-6929
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 15 lines ...
core.sh:1150: Successful get rc frontend {{.spec.replicas}}: 3
replicationcontroller/frontend scaled
E0214 18:18:21.536282   55474 replica_set.go:200] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend  namespace-1581704296-6929 /api/v1/namespaces/namespace-1581704296-6929/replicationcontrollers/frontend 9faf8615-d03d-4c70-af9e-8c8c2775d32b 1894 2 2020-02-14 18:18:18 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v4 [] []  [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002c4dcf8 <nil> ClusterFirst map[]   <nil>  false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
I0214 18:18:21.565724   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581704296-6929", Name:"frontend", UID:"9faf8615-d03d-4c70-af9e-8c8c2775d32b", APIVersion:"v1", ResourceVersion:"1894", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-sgsv6
core.sh:1154: Successful get rc frontend {{.spec.replicas}}: 2
core.sh:1158: Successful get rc frontend {{.spec.replicas}}: 2
error: Expected replicas to be 3, was 2
core.sh:1162: Successful get rc frontend {{.spec.replicas}}: 2
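The "Expected replicas to be 3, was 2" failure above is the --current-replicas precondition on kubectl scale: the caller asserted 3 replicas, the controller actually had 2, so the scale is refused and the count stays at 2. A hypothetical form:

  kubectl scale rc frontend --current-replicas=3 --replicas=3   # refused: actual replica count is 2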
core.sh:1166: Successful get rc frontend {{.spec.replicas}}: 2
replicationcontroller/frontend scaled
I0214 18:18:22.925855   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581704296-6929", Name:"frontend", UID:"9faf8615-d03d-4c70-af9e-8c8c2775d32b", APIVersion:"v1", ResourceVersion:"1901", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-j8qzf
core.sh:1170: Successful get rc frontend {{.spec.replicas}}: 3
core.sh:1174: Successful get rc frontend {{.spec.replicas}}: 3
... skipping 31 lines ...
deployment.apps "nginx-deployment" deleted
Successful
message:service/expose-test-deployment exposed
has:service/expose-test-deployment exposed
service "expose-test-deployment" deleted
Successful
message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
See 'kubectl expose -h' for help and examples
has:invalid deployment: no selectors
deployment.apps/nginx-deployment created
I0214 18:18:28.523322   55474 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581704296-6929", Name:"nginx-deployment", UID:"b0ee2049-4281-4528-b98f-bb6097faafaf", APIVersion:"apps/v1", ResourceVersion:"2010", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
I0214 18:18:28.531071   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704296-6929", Name:"nginx-deployment-6986c7bc94", UID:"6d549d3c-38d8-40ce-88fc-2666b043d9df", APIVersion:"apps/v1", ResourceVersion:"2011", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-2zkw4
I0214 18:18:28.541686   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704296-6929", Name:"nginx-deployment-6986c7bc94", UID:"6d549d3c-38d8-40ce-88fc-2666b043d9df", APIVersion:"apps/v1", ResourceVersion:"2011", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-xxcq2
... skipping 23 lines ...
service "frontend" deleted
service "frontend-2" deleted
service "frontend-3" deleted
service "frontend-4" deleted
service "frontend-5" deleted
Successful
message:error: cannot expose a Node
has:cannot expose
Successful
message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
has:metadata.name: Invalid value
Successful
message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 14 lines ...
I0214 18:18:36.675790   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581704296-6929", Name:"frontend", UID:"88ac44a2-ff53-4861-9469-cfab41d0cfa7", APIVersion:"v1", ResourceVersion:"2133", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-plb6j
I0214 18:18:36.676247   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581704296-6929", Name:"frontend", UID:"88ac44a2-ff53-4861-9469-cfab41d0cfa7", APIVersion:"v1", ResourceVersion:"2133", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-8bbrk
replicationcontroller/redis-slave created
I0214 18:18:37.132627   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581704296-6929", Name:"redis-slave", UID:"53d96f6e-9fea-49ea-bd61-5304ae2af4ce", APIVersion:"v1", ResourceVersion:"2142", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-scbkl
I0214 18:18:37.176365   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581704296-6929", Name:"redis-slave", UID:"53d96f6e-9fea-49ea-bd61-5304ae2af4ce", APIVersion:"v1", ResourceVersion:"2142", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-kfmq2
core.sh:1299: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
E0214 18:18:37.588022   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1303: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:redis-slave:
replicationcontroller "frontend" deleted
replicationcontroller "redis-slave" deleted
core.sh:1307: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:1311: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/frontend created
... skipping 4 lines ...
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1317: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
horizontalpodautoscaler.autoscaling "frontend" deleted
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1321: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
horizontalpodautoscaler.autoscaling "frontend" deleted
Error: required flag(s) "max" not set
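kubectl autoscale treats --max as mandatory, which is what the Error line above reports; the two HPAs created earlier (1/2/70% and 2/3/80%) show flag combinations that do work. A hedged sketch:

  kubectl autoscale rc frontend --min=2 --max=3 --cpu-percent=80   # ok
  kubectl autoscale rc frontend --min=2 --cpu-percent=80           # Error: required flag(s) "max" not set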
replicationcontroller "frontend" deleted
core.sh:1330: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
(BapiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
... skipping 25 lines ...
            cpu: 300m
          requests:
            cpu: 300m
      terminationGracePeriodSeconds: 0
status: {}
W0214 18:18:41.360113   78934 helpers.go:534] --dry-run is deprecated and can be replaced with --dry-run=client.
Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
deployment.apps/nginx-deployment-resources created
I0214 18:18:41.847124   55474 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581704296-6929", Name:"nginx-deployment-resources", UID:"c8e39bea-6085-40f8-8be2-abb6155f7bb7", APIVersion:"apps/v1", ResourceVersion:"2187", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-67f8cfff5 to 3
I0214 18:18:41.855101   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704296-6929", Name:"nginx-deployment-resources-67f8cfff5", UID:"5571f9d9-0881-4345-aa33-397abc6912db", APIVersion:"apps/v1", ResourceVersion:"2188", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-vt7xw
I0214 18:18:41.895340   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704296-6929", Name:"nginx-deployment-resources-67f8cfff5", UID:"5571f9d9-0881-4345-aa33-397abc6912db", APIVersion:"apps/v1", ResourceVersion:"2188", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-z6wt5
I0214 18:18:41.895989   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704296-6929", Name:"nginx-deployment-resources-67f8cfff5", UID:"5571f9d9-0881-4345-aa33-397abc6912db", APIVersion:"apps/v1", ResourceVersion:"2188", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-27cvs
core.sh:1336: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
core.sh:1337: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
core.sh:1338: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment-resources resource requirements updated
I0214 18:18:42.861045   55474 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581704296-6929", Name:"nginx-deployment-resources", UID:"c8e39bea-6085-40f8-8be2-abb6155f7bb7", APIVersion:"apps/v1", ResourceVersion:"2202", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-55c547f795 to 1
I0214 18:18:42.868177   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704296-6929", Name:"nginx-deployment-resources-55c547f795", UID:"b99f3ddd-f801-4afc-b4fa-77ef1248d020", APIVersion:"apps/v1", ResourceVersion:"2203", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-55c547f795-f6q5d
core.sh:1341: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
core.sh:1342: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
error: unable to find container named redis
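The cpu-limit assertions around this point (100m on both containers above, 200m on the first container just below) and the "unable to find container named redis" failure match kubectl set resources, which can target a single container by name. A sketch with the flag shapes assumed:

  kubectl set resources deployment nginx-deployment-resources --limits=cpu=100m            # all containers
  kubectl set resources deployment nginx-deployment-resources -c=nginx --limits=cpu=200m   # one container
  kubectl set resources deployment nginx-deployment-resources -c=redis --limits=cpu=200m   # fails: no such container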
deployment.apps/nginx-deployment-resources resource requirements updated
I0214 18:18:43.769156   55474 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581704296-6929", Name:"nginx-deployment-resources", UID:"c8e39bea-6085-40f8-8be2-abb6155f7bb7", APIVersion:"apps/v1", ResourceVersion:"2214", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-67f8cfff5 to 2
I0214 18:18:43.778105   55474 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581704296-6929", Name:"nginx-deployment-resources", UID:"c8e39bea-6085-40f8-8be2-abb6155f7bb7", APIVersion:"apps/v1", ResourceVersion:"2216", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6d86564b45 to 1
I0214 18:18:43.785653   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704296-6929", Name:"nginx-deployment-resources-67f8cfff5", UID:"5571f9d9-0881-4345-aa33-397abc6912db", APIVersion:"apps/v1", ResourceVersion:"2218", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-67f8cfff5-vt7xw
I0214 18:18:43.789704   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704296-6929", Name:"nginx-deployment-resources-6d86564b45", UID:"ac4da164-4c2d-44db-bfbc-f2ab5464898f", APIVersion:"apps/v1", ResourceVersion:"2221", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6d86564b45-gq2xd
E0214 18:18:43.831933   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1347: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
core.sh:1348: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
deployment.apps/nginx-deployment-resources resource requirements updated
I0214 18:18:44.455588   55474 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581704296-6929", Name:"nginx-deployment-resources", UID:"c8e39bea-6085-40f8-8be2-abb6155f7bb7", APIVersion:"apps/v1", ResourceVersion:"2235", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-67f8cfff5 to 1
I0214 18:18:44.464661   55474 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581704296-6929", Name:"nginx-deployment-resources", UID:"c8e39bea-6085-40f8-8be2-abb6155f7bb7", APIVersion:"apps/v1", ResourceVersion:"2237", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6c478d4fdb to 1
I0214 18:18:44.465047   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704296-6929", Name:"nginx-deployment-resources-67f8cfff5", UID:"5571f9d9-0881-4345-aa33-397abc6912db", APIVersion:"apps/v1", ResourceVersion:"2239", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-67f8cfff5-27cvs
... skipping 76 lines ...
    status: "True"
    type: Progressing
  observedGeneration: 4
  replicas: 4
  unavailableReplicas: 4
  updatedReplicas: 1
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
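The error above is kubectl refusing --local without an input manifest: --local edits a file's content instead of the live object, so it needs -f. A sketch using the hypothetical file name suggested by the error text itself:

  # Edits the manifest offline and prints the result; the cluster is never contacted.
  kubectl set resources -f rsrc.yaml --local --limits=cpu=200m -o yaml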
E0214 18:18:45.810217   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1357: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
core.sh:1358: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
core.sh:1359: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
deployment.apps "nginx-deployment-resources" deleted
+++ exit code: 0
Recording: run_deployment_tests
... skipping 43 lines ...
                pod-template-hash=79b9bd9585
Annotations:    deployment.kubernetes.io/desired-replicas: 1
                deployment.kubernetes.io/max-replicas: 2
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/test-nginx-apps
Replicas:       1 current / 1 desired
Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=test-nginx-apps
           pod-template-hash=79b9bd9585
  Containers:
   nginx:
    Image:        k8s.gcr.io/nginx:test-cmd
... skipping 102 lines ...
I0214 18:19:01.895738   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704326-19907", Name:"nginx-78487f9fd7", UID:"4d28199a-24f5-4416-a7a6-dabb47bc880f", APIVersion:"apps/v1", ResourceVersion:"2425", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-78487f9fd7-hskrw
apps.sh:301: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
    Image:	k8s.gcr.io/nginx:test-cmd
deployment.apps/nginx rolled back (server dry run)
apps.sh:305: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
deployment.apps/nginx rolled back
E0214 18:19:03.644293   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:309: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
error: unable to find specified revision 1000000 in history
apps.sh:312: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
deployment.apps/nginx rolled back
apps.sh:316: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
deployment.apps/nginx paused
error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
error: deployments.apps "nginx" can't restart paused deployment (run rollout resume first)
deployment.apps/nginx resumed
deployment.apps/nginx rolled back
    deployment.kubernetes.io/revision-history: 1,3
error: desired revision (3) is different from the running revision (5)
deployment.apps/nginx restarted
I0214 18:19:08.730114   55474 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581704326-19907", Name:"nginx", UID:"cbad9ce4-416d-42aa-b2ca-52803618cbf1", APIVersion:"apps/v1", ResourceVersion:"2459", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-f87d999f7 to 2
I0214 18:19:08.736880   55474 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581704326-19907", Name:"nginx", UID:"cbad9ce4-416d-42aa-b2ca-52803618cbf1", APIVersion:"apps/v1", ResourceVersion:"2461", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7f4f989544 to 1
I0214 18:19:08.739990   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704326-19907", Name:"nginx-f87d999f7", UID:"79118ccc-cc37-4632-8164-8b779af2ec1e", APIVersion:"apps/v1", ResourceVersion:"2463", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-f87d999f7-286mz
I0214 18:19:08.740548   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704326-19907", Name:"nginx-7f4f989544", UID:"9140e02d-440c-4afd-bdfe-63d08e8959f8", APIVersion:"apps/v1", ResourceVersion:"2466", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7f4f989544-nghsn
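The sequence above walks kubectl's rollout subcommands end to end: an undo as a server-side dry run, an undo to a nonexistent revision, the pause/resume guard rails, and finally a restart. Condensed into the likely command shapes (names and flags inferred from the messages, not shown verbatim):

  kubectl rollout undo deployment nginx --dry-run=server       # "rolled back (server dry run)"
  kubectl rollout undo deployment nginx --to-revision=1000000  # fails: revision not in history
  kubectl rollout pause deployment nginx
  kubectl rollout undo deployment nginx                        # rejected while paused
  kubectl rollout restart deployment nginx                     # rejected while paused
  kubectl rollout resume deployment nginx
  kubectl rollout restart deployment nginx                     # now triggers a new rollout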
Successful
... skipping 74 lines ...
apps.sh:351: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment image updated (dry run)
deployment.apps/nginx-deployment image updated (server dry run)
I0214 18:19:13.999053   55474 horizontal.go:354] Horizontal Pod Autoscaler nginx-deployment has been deleted in namespace-1581704326-19907
apps.sh:355: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:356: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
E0214 18:19:14.554220   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment image updated
I0214 18:19:14.616376   55474 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581704326-19907", Name:"nginx-deployment", UID:"f01c5a3d-ba3c-4d9e-9528-e03b511f1cc4", APIVersion:"apps/v1", ResourceVersion:"2535", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-59df9b5f5b to 1
I0214 18:19:14.620894   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704326-19907", Name:"nginx-deployment-59df9b5f5b", UID:"94770cd1-731a-41b7-b61a-0b1db0142e52", APIVersion:"apps/v1", ResourceVersion:"2536", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-59df9b5f5b-qsb68
apps.sh:359: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
apps.sh:360: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
error: unable to find container named "redis"
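The "image updated" lines and the quoted-container failure above come from kubectl set image, which also supports dry runs and a wildcard for all containers. A sketch with container and image names taken from the surrounding assertions (the redis image here is hypothetical):

  kubectl set image deployment nginx-deployment nginx=k8s.gcr.io/nginx:1.7.9    # one container
  kubectl set image deployment nginx-deployment redis=k8s.gcr.io/redis:1.0      # fails: no container "redis"
  kubectl set image deployment nginx-deployment '*'=k8s.gcr.io/nginx:test-cmd   # every container (form assumed)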
deployment.apps/nginx-deployment image updated
apps.sh:365: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
(Bapps.sh:366: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
(Bdeployment.apps/nginx-deployment image updated
apps.sh:369: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
apps.sh:370: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
... skipping 7 lines ...
apps.sh:377: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:378: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:381: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:382: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
deployment.apps "nginx-deployment" deleted
apps.sh:388: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
E0214 18:19:19.697176   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment created
I0214 18:19:20.074120   55474 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581704326-19907", Name:"nginx-deployment", UID:"0889c636-3c96-47b6-8d57-54d71d5736bc", APIVersion:"apps/v1", ResourceVersion:"2592", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-598d4d68b4 to 3
I0214 18:19:20.085839   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704326-19907", Name:"nginx-deployment-598d4d68b4", UID:"863ea7fa-e023-4ebb-8e09-0450966cfeaf", APIVersion:"apps/v1", ResourceVersion:"2593", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-wv2m7
I0214 18:19:20.092041   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704326-19907", Name:"nginx-deployment-598d4d68b4", UID:"863ea7fa-e023-4ebb-8e09-0450966cfeaf", APIVersion:"apps/v1", ResourceVersion:"2593", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-c5hdv
I0214 18:19:20.092338   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704326-19907", Name:"nginx-deployment-598d4d68b4", UID:"863ea7fa-e023-4ebb-8e09-0450966cfeaf", APIVersion:"apps/v1", ResourceVersion:"2593", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-598d4d68b4-z9kq6
configmap/test-set-env-config created
... skipping 38 lines ...
deployment.apps/nginx-deployment env updated
I0214 18:19:25.959664   55474 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581704326-19907", Name:"nginx-deployment", UID:"0889c636-3c96-47b6-8d57-54d71d5736bc", APIVersion:"apps/v1", ResourceVersion:"2728", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-c6d5c5c7b to 0
I0214 18:19:25.971155   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704326-19907", Name:"nginx-deployment-c6d5c5c7b", UID:"f825ec30-3bed-4384-8143-513b27cd6773", APIVersion:"apps/v1", ResourceVersion:"2732", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-c6d5c5c7b-7v6gs
I0214 18:19:26.111139   55474 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581704326-19907", Name:"nginx-deployment", UID:"0889c636-3c96-47b6-8d57-54d71d5736bc", APIVersion:"apps/v1", ResourceVersion:"2730", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-868b664cb5 to 1
I0214 18:19:26.119688   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704326-19907", Name:"nginx-deployment-868b664cb5", UID:"c7bde0e1-0fb6-4b1a-9a3d-1c3e80340d2b", APIVersion:"apps/v1", ResourceVersion:"2739", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-868b664cb5-5b8bx
deployment.apps "nginx-deployment" deleted
E0214 18:19:26.214476   55474 replica_set.go:535] sync "namespace-1581704326-19907/nginx-deployment-868b664cb5" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-868b664cb5": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1581704326-19907/nginx-deployment-868b664cb5, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: c7bde0e1-0fb6-4b1a-9a3d-1c3e80340d2b, UID in object meta: 
configmap "test-set-env-config" deleted
secret "test-set-env-secret" deleted
+++ exit code: 0
Recording: run_rs_tests
Running command: run_rs_tests

... skipping 44 lines ...
Namespace:    namespace-1581704367-24004
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1581704367-24004
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
Namespace:    namespace-1581704367-24004
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
Namespace:    namespace-1581704367-24004
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 25 lines ...
Namespace:    namespace-1581704367-24004
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1581704367-24004
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1581704367-24004
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
Namespace:    namespace-1581704367-24004
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 112 lines ...
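The run of near-identical "Replicas: 3 current / 3 desired" describe blocks above is kubectl describe exercised repeatedly against the same frontend ReplicaSet, driving it through its different target spellings. The exact invocations are not shown in this log, but the accepted forms look like:

  kubectl describe rs frontend
  kubectl describe rs/frontend
  kubectl describe -f frontend.yaml      # file name hypothetical
  kubectl describe rs -l tier=frontend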
deployment.apps/scale-1 created
I0214 18:19:36.212236   55474 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581704367-24004", Name:"scale-1", UID:"59b97a80-a3cc-4ba4-bbb8-11f14f9ddb81", APIVersion:"apps/v1", ResourceVersion:"2830", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-1-5c5565bcd9 to 1
I0214 18:19:36.221890   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704367-24004", Name:"scale-1-5c5565bcd9", UID:"e0971c06-80e1-4bd4-a601-c9bbe51d57b2", APIVersion:"apps/v1", ResourceVersion:"2831", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-1-5c5565bcd9-qxmnp
deployment.apps/scale-2 created
I0214 18:19:36.732209   55474 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581704367-24004", Name:"scale-2", UID:"b9f985ae-653b-4de1-abc0-a06d33ac8887", APIVersion:"apps/v1", ResourceVersion:"2840", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-2-5c5565bcd9 to 1
I0214 18:19:36.738243   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704367-24004", Name:"scale-2-5c5565bcd9", UID:"8f1c2940-8565-4308-b73c-8b1e58f9e3fb", APIVersion:"apps/v1", ResourceVersion:"2841", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-2-5c5565bcd9-kdng4
E0214 18:19:36.838084   55474 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/scale-3 created
I0214 18:19:37.208877   55474 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581704367-24004", Name:"scale-3", UID:"dd54041d-89c1-4aee-bac9-f95c2cc31191", APIVersion:"apps/v1", ResourceVersion:"2850", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-3-5c5565bcd9 to 1
I0214 18:19:37.215649   55474 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581704367-24004", Name:"scale-3-5c5565bcd9", UID:"6b526732-7e99-4a42-98b8-3e27b577993b", APIVersion:"apps/v1", ResourceVersion:"2851", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-3-5c5565bcd9-qg8ph
apps.sh:596: Successful get deploy scale-1 {{.spec.replicas}}: 1
apps.sh:597: Successful get deploy scale-2 {{.spec.replicas}}: 1
apps.sh:598: Successful get deploy scale-3 {{.spec.replicas}}: 1
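The three assertions above confirm each scale-N Deployment starts at one replica; kubectl scale can then resize several of them in a single call. A sketch with the target replica count assumed:

  kubectl scale deployment scale-1 scale-2 scale-3 --replicas=2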
... skipping 74 lines ...
horizontalpodautoscaler.autoscaling/frontend autoscaled
apps.sh:680: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
horizontalpodautoscaler.autoscaling "frontend" deleted
horizontalpodautoscaler.autoscaling/frontend autoscaled
apps.sh:684: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
horizontalpodautoscaler.autoscaling "frontend" deleted
Error: required flag(s) "max" not set
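This block repeats the earlier autoscale matrix against a ReplicaSet instead of a ReplicationController, down to the same missing --max failure; only the target kind changes, e.g. (flags assumed):

  kubectl autoscale rs frontend --max=2 --cpu-percent=70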
replicaset.apps "frontend" deleted
+++ exit code: 0
Recording: run_stateful_set_tests
Running command: run_stateful_set_tests

+++ Running case: test-cmd.run_stateful_set_tests 
... skipping 3 lines ...
namespace/namespace-1581704392-3835 created
Context "test" modified.
+++ [0214 18:19:53] Testing kubectl(v1:statefulsets)
apps.sh:492: Successful get statefulset {{range.items}}{{.metadata.name}}:{{end}}: 
I0214 18:19:53.818118   51991 controller.go:606] quota admission added evaluator for: statefulsets.apps
statefulset.apps/nginx created
{"component":"entrypoint","file":"prow/entrypoint/run.go:168","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","time":"2020-02-14T18:19:53Z"}