Result: FAILURE
Tests: 0 failed / 62 succeeded
Started: 2021-10-20 22:44
Elapsed: 16m23s
Revision: master

No Test Failures!



Error lines from build-log.txt

... skipping 76 lines ...
Recording: record_command_canary
Running command: record_command_canary

+++ Running case: test-cmd.record_command_canary 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: record_command_canary
/home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 157: bogus-expected-to-fail: command not found
!!! [1020 22:49:41] Call tree:
!!! [1020 22:49:41]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 record_command_canary(...)
!!! [1020 22:49:41]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [1020 22:49:41]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:133 juLog(...)
!!! [1020 22:49:41]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:161 record_command(...)
!!! [1020 22:49:41]  5: hack/make-rules/test-cmd.sh:35 source(...)
+++ exit code: 1
+++ error: 1
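The failure above is deliberate: record_command_canary runs a nonexistent command (bogus-expected-to-fail) so the suite can prove that its shell2junit recording path catches non-zero exits and prints a call tree. A minimal sketch of that record-and-verify pattern, with illustrative function names rather than the harness's real API:

    record_cmd() {
      local name="$1"; shift
      "$@"                      # run the recorded command
      local rc=$?               # capture its exit code
      if [ "$rc" -ne 0 ]; then
        echo "!!! ${name} failed with exit code ${rc}; call tree:"
        local i=0
        while caller "$i"; do i=$((i + 1)); done   # walk the bash call stack
      fi
      return "$rc"
    }

    # The canary records a command that cannot exist, proving failures are caught:
    record_cmd record_command_canary bogus-expected-to-fail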
+++ [1020 22:49:41] Running kubeadm tests
+++ [1020 22:49:45] Building go targets for linux/amd64:
    cmd/kubeadm
> static build CGO_ENABLED=0: k8s.io/kubernetes/cmd/kubeadm
+++ [1020 22:50:38] Running tests without code coverage 
{"Time":"2021-10-20T22:51:23.379734554Z","Action":"output","Package":"k8s.io/kubernetes/cmd/kubeadm/test/cmd","Output":"ok  \tk8s.io/kubernetes/cmd/kubeadm/test/cmd\t43.315s\n"}
... skipping 202 lines ...
+++ [1020 22:53:42] Building go targets for linux/amd64:
    cmd/kube-controller-manager
> static build CGO_ENABLED=0: k8s.io/kubernetes/cmd/kube-controller-manager
+++ [1020 22:54:09] Generate kubeconfig for controller-manager
+++ [1020 22:54:09] Starting controller-manager
I1020 22:54:09.657718   56921 serving.go:348] Generated self-signed cert in-memory
W1020 22:54:10.691749   56921 authentication.go:419] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W1020 22:54:10.691785   56921 authentication.go:316] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W1020 22:54:10.691792   56921 authentication.go:340] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W1020 22:54:10.691804   56921 authorization.go:225] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W1020 22:54:10.691826   56921 authorization.go:193] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I1020 22:54:10.691849   56921 controllermanager.go:188] Version: v1.23.0-alpha.3.453+c269494ebc4337
I1020 22:54:10.693225   56921 secure_serving.go:200] Serving securely on [::]:10257
I1020 22:54:10.693355   56921 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I1020 22:54:10.693640   56921 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
+++ [1020 22:54:10] On try 2, controller-manager: ok
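"On try 2, controller-manager: ok" comes from a poll-until-healthy loop: the harness retries the component's health endpoint until it answers. A hedged sketch of the pattern; the port matches the secure-serving line above, while the attempt count and curl flags are illustrative:

    for try in $(seq 1 10); do
      if curl -k -s -f https://127.0.0.1:10257/healthz >/dev/null; then
        echo "On try ${try}, controller-manager: ok"
        break
      fi
      sleep 1
    done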
... skipping 144 lines ...
I1020 22:54:10.810702   56921 controllermanager.go:597] Started "disruption"
I1020 22:54:10.810836   56921 disruption.go:363] Starting disruption controller
I1020 22:54:10.810857   56921 shared_informer.go:240] Waiting for caches to sync for disruption
I1020 22:54:10.810978   56921 controllermanager.go:597] Started "cronjob"
I1020 22:54:10.811073   56921 cronjob_controllerv2.go:125] "Starting cronjob controller v2"
I1020 22:54:10.811154   56921 shared_informer.go:240] Waiting for caches to sync for cronjob
E1020 22:54:10.811468   56921 core.go:92] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W1020 22:54:10.811484   56921 controllermanager.go:575] Skipping "service"
W1020 22:54:10.812063   56921 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I1020 22:54:10.812121   56921 controllermanager.go:597] Started "persistentvolume-binder"
I1020 22:54:10.812290   56921 pv_controller_base.go:308] Starting persistent volume controller
I1020 22:54:10.812314   56921 shared_informer.go:240] Waiting for caches to sync for persistent volume
I1020 22:54:10.812474   56921 controllermanager.go:597] Started "ttl-after-finished"
I1020 22:54:10.812675   56921 ttlafterfinished_controller.go:109] Starting TTL after finished controller
I1020 22:54:10.812718   56921 shared_informer.go:240] Waiting for caches to sync for TTL after finished
I1020 22:54:10.812818   56921 controllermanager.go:597] Started "daemonset"
I1020 22:54:10.812983   56921 daemon_controller.go:284] Starting daemon sets controller
I1020 22:54:10.813004   56921 shared_informer.go:240] Waiting for caches to sync for daemon sets
I1020 22:54:10.812993   56921 node_lifecycle_controller.go:76] Sending events to api server
E1020 22:54:10.813047   56921 core.go:211] failed to start cloud node lifecycle controller: no cloud provider provided
W1020 22:54:10.813058   56921 controllermanager.go:575] Skipping "cloud-node-lifecycle"
I1020 22:54:10.813334   56921 controllermanager.go:597] Started "pv-protection"
I1020 22:54:10.816121   56921 shared_informer.go:240] Waiting for caches to sync for resource quota
I1020 22:54:10.816255   56921 pv_protection_controller.go:83] Starting PV protection controller
I1020 22:54:10.816286   56921 shared_informer.go:240] Waiting for caches to sync for PV protection
W1020 22:54:10.851683   56921 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
... skipping 38 lines ...
I1020 22:54:11.211005   56921 shared_informer.go:247] Caches are synced for disruption 
I1020 22:54:11.211018   56921 disruption.go:371] Sending events to api server.
I1020 22:54:11.213296   56921 shared_informer.go:247] Caches are synced for persistent volume 
I1020 22:54:11.213343   56921 shared_informer.go:247] Caches are synced for daemon sets 
I1020 22:54:11.216571   56921 shared_informer.go:247] Caches are synced for resource quota 
node/127.0.0.1 created
W1020 22:54:11.508983   56921 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
+++ [1020 22:54:11] Checking kubectl version
Client Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.0-alpha.3.453+c269494ebc4337", GitCommit:"c269494ebc43378b686dafa277feb1c42acee531", GitTreeState:"clean", BuildDate:"2021-10-20T22:16:16Z", GoVersion:"go1.17.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.0-alpha.3.453+c269494ebc4337", GitCommit:"c269494ebc43378b686dafa277feb1c42acee531", GitTreeState:"clean", BuildDate:"2021-10-20T22:16:16Z", GoVersion:"go1.17.2", Compiler:"gc", Platform:"linux/amd64"}
I1020 22:54:11.654848   56921 shared_informer.go:247] Caches are synced for garbage collector 
I1020 22:54:11.700419   56921 shared_informer.go:247] Caches are synced for garbage collector 
I1020 22:54:11.700467   56921 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
The Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.0.0.1"}: failed to allocate IP 10.0.0.1: provided IP is already allocated
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   34s
Recording: run_kubectl_version_tests
Running command: run_kubectl_version_tests

+++ Running case: test-cmd.run_kubectl_version_tests 
... skipping 100 lines ...
I1020 22:54:16.197283   56921 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: 
I1020 22:54:16.197448   56921 node_lifecycle_controller.go:1163] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I1020 22:54:16.197529   56921 event.go:294] "Event occurred" object="127.0.0.1" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller"
namespace/namespace-1634770456-1115 created
Context "test" modified.
+++ [1020 22:54:16] Testing RESTMapper
+++ [1020 22:54:16] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
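Lines of the form "... returns error as expected" come from an expect-failure helper: run the command, require a non-zero exit, then grep the captured output. A hedged sketch of that pattern:

    if output=$(kubectl get unknownresourcetype 2>&1); then
      echo "FAIL: expected kubectl to reject the unknown resource type" >&2
      exit 1
    fi
    grep -q "doesn't have a resource type" <<<"${output}"   # assert on the message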
NAME                              SHORTNAMES   APIVERSION                             NAMESPACED   KIND
bindings                                       v1                                     true         Binding
componentstatuses                 cs           v1                                     false        ComponentStatus
configmaps                        cm           v1                                     true         ConfigMap
endpoints                         ep           v1                                     true         Endpoints
... skipping 61 lines ...
namespace/namespace-1634770464-30260 created
Context "test" modified.
+++ [1020 22:54:24] Testing clusterroles
rbac.sh:29: Successful get clusterroles/cluster-admin {{.metadata.name}}: cluster-admin
rbac.sh:30: Successful get clusterrolebindings/cluster-admin {{.metadata.name}}: cluster-admin
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created (dry run)
clusterrole.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created
rbac.sh:42: Successful get clusterrole/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "pod-admin" deleted
... skipping 18 lines ...
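The "rbac.sh:NN: Successful get ..." lines come from an assertion helper that renders one field of a live object through a Go template and compares it with an expected string. A hedged approximation of that pattern (the helper name here is illustrative, not the real test-cmd library function):

    assert_field() {
      local resource="$1" template="$2" expected="$3"
      local actual
      actual=$(kubectl get "${resource}" -o go-template="${template}")
      if [ "${actual}" = "${expected}" ]; then
        echo "Successful get ${resource} ${template}: ${actual}"
      else
        echo "FAIL: expected '${expected}' but got '${actual}'" >&2
        return 1
      fi
    }

    assert_field clusterrole/pod-admin \
      '{{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}' '*:'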
clusterrole.rbac.authorization.k8s.io/url-reader created
rbac.sh:61: Successful get clusterrole/url-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:
rbac.sh:62: Successful get clusterrole/url-reader {{range.rules}}{{range.nonResourceURLs}}{{.}}:{{end}}{{end}}: /logs/*:/healthz/*:
clusterrole.rbac.authorization.k8s.io/aggregation-reader created
rbac.sh:64: Successful get clusterrole/aggregation-reader {{.metadata.name}}: aggregation-reader
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created
rbac.sh:77: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (server dry run)
rbac.sh:80: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
... skipping 64 lines ...
rbac.sh:102: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:foo:test-all-user:
rbac.sh:103: Successful get clusterrolebinding/super-group {{range.subjects}}{{.name}}:{{end}}: the-group:foo:test-all-user:
rbac.sh:104: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.name}}:{{end}}: sa-name:foo:test-all-user:
rolebinding.rbac.authorization.k8s.io/admin created (dry run)
rolebinding.rbac.authorization.k8s.io/admin created (server dry run)
Successful
message:Error from server (NotFound): rolebindings.rbac.authorization.k8s.io "admin" not found
has: not found
rolebinding.rbac.authorization.k8s.io/admin created
rbac.sh:113: Successful get rolebinding/admin {{.roleRef.kind}}: ClusterRole
rbac.sh:114: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:
rolebinding.rbac.authorization.k8s.io/admin subjects updated
rbac.sh:116: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:foo:
... skipping 152 lines ...
namespace/namespace-1634770472-30887 created
Context "test" modified.
+++ [1020 22:54:32] Testing role
role.rbac.authorization.k8s.io/pod-admin created (dry run)
role.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): roles.rbac.authorization.k8s.io "pod-admin" not found
has: not found
role.rbac.authorization.k8s.io/pod-admin created
rbac.sh:159: Successful get role/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
rbac.sh:160: Successful get role/pod-admin {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
rbac.sh:161: Successful get role/pod-admin {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
Successful
... skipping 440 lines ...
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
has:valid-pod
core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: resource(s) were provided, but no name was specified
core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: setting 'all' parameter but found a non empty selector.
core.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:210: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(Bwarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:214: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:219: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
... skipping 30 lines ...
I1020 22:54:45.314046   61664 round_trippers.go:541] GET https://127.0.0.1:6443/apis/policy/v1/namespaces/test-kubectl-describe-pod/poddisruptionbudgets/test-pdb-2 200 OK in 1 milliseconds
I1020 22:54:45.315770   61664 round_trippers.go:541] GET https://127.0.0.1:6443/api/v1/namespaces/test-kubectl-describe-pod/events?fieldSelector=involvedObject.namespace%3Dtest-kubectl-describe-pod%2CinvolvedObject.kind%3DPodDisruptionBudget%2CinvolvedObject.uid%3D627647fb-0edc-462c-80cb-e67df998a189%2CinvolvedObject.name%3Dtest-pdb-2&limit=500 200 OK in 1 milliseconds
poddisruptionbudget.policy/test-pdb-3 created
core.sh:271: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
poddisruptionbudget.policy/test-pdb-4 created
core.sh:275: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
error: min-available and max-unavailable cannot be both specified
core.sh:281: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
pod/env-test-pod created
matched TEST_CMD_1
matched <set to the key 'key-1' in secret 'test-secret'>
matched TEST_CMD_2
matched <set to the key 'key-2' of config map 'test-configmap'>
... skipping 240 lines ...
core.sh:542: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.6:
Successful
message:kubectl-create kubectl-patch
has:kubectl-patch
pod/valid-pod patched
core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
+++ [1020 22:55:02] "kubectl patch with resourceVersion 604" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
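The expected Conflict above exercises optimistic concurrency: a patch that pins a stale resourceVersion is rejected until the client re-reads the latest object. A hedged reproduction (the patched spec field is illustrative; the pinned version 604 comes from the message above):

    kubectl patch pod valid-pod --type=merge \
      -p '{"metadata":{"resourceVersion":"604"},"spec":{"activeDeadlineSeconds":5}}'
    # -> Error from server (Conflict): the object has been modified; please
    #    apply your changes to the latest version and try again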
pod "valid-pod" deleted
pod/valid-pod replaced
core.sh:586: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
Successful
message:kubectl-replace
has:kubectl-replace
Successful
message:error: --grace-period must have --force specified
has:\-\-grace-period must have \-\-force specified
Successful
message:error: --timeout must have --force specified
has:\-\-timeout must have \-\-force specified
node/node-v1-test created
W1020 22:55:03.049376   56921 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
core.sh:614: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
node/node-v1-test replaced (server dry run)
node/node-v1-test replaced (dry run)
core.sh:639: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
node/node-v1-test replaced
core.sh:655: Successful get node node-v1-test {{.metadata.annotations.a}}: b
... skipping 29 lines ...
spec:
  containers:
  - image: k8s.gcr.io/pause:3.6
    name: kubernetes-pause
has:localonlyvalue
core.sh:691: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
error: 'name' already has a value (valid-pod), and --overwrite is false
core.sh:695: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
core.sh:699: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
(Bpod/valid-pod labeled
core.sh:703: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
core.sh:707: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 83 lines ...
+++ Running case: test-cmd.run_kubectl_create_error_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_error_tests
+++ [1020 22:55:12] Creating namespace namespace-1634770512-27474
namespace/namespace-1634770512-27474 created
Context "test" modified.
+++ [1020 22:55:13] Testing kubectl create with error
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
... skipping 43 lines ...

Usage:
  kubectl create -f FILENAME [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
+++ [1020 22:55:13] "kubectl create with empty string list" returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
+++ exit code: 0
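The validation failure above is driven by a manifest whose container args list contains a null entry. A hedged reconstruction of the shape (not the actual testdata file):

    cat <<'EOF' | kubectl create -f -
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: invalid-rc
    spec:
      replicas: 1
      selector:
        app: invalid
      template:
        metadata:
          labels:
            app: invalid
        spec:
          containers:
          - name: shell
            image: busybox
            args:
            -            # a bare "-" yields a nil element: unknown object type "nil"
    EOF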
Recording: run_kubectl_apply_tests
Running command: run_kubectl_apply_tests

+++ Running case: test-cmd.run_kubectl_apply_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 29 lines ...
I1020 22:55:16.179292   56921 event.go:294] "Event occurred" object="namespace-1634770513-12060/test-deployment-retainkeys-8695b756f8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-deployment-retainkeys-8695b756f8-85fvr"
deployment.apps "test-deployment-retainkeys" deleted
apply.sh:88: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/selector-test-pod created
apply.sh:92: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
apply.sh:101: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/test-pod created (dry run)
pod/test-pod created (server dry run)
apply.sh:107: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 27 lines ...
pod/b created
apply.sh:207: Successful get pods a {{.metadata.name}}: a
apply.sh:208: Successful get pods b -n nsb {{.metadata.name}}: b
pod "a" deleted
pod "b" deleted
Successful
message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
has:all resources selected for prune without explicitly passing --all
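The prune refusal above is a safety check: --prune demands an explicit scope before it will delete anything. A hedged example of the two accepted forms:

    kubectl apply -f dir/ --prune -l app=mine   # prune only within a label selector
    kubectl apply -f dir/ --prune --all         # or explicitly opt into everything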
pod/a created
pod/b created
service/prune-svc created
I1020 22:55:27.369020   56921 horizontal.go:361] Horizontal Pod Autoscaler frontend has been deleted in namespace-1634770510-23010
apply.sh:220: Successful get pods a {{.metadata.name}}: a
... skipping 35 lines ...
apply.sh:261: Successful get pods b -n nsb {{.metadata.name}}: b
pod/b unchanged
pod/a pruned
apply.sh:265: Successful get pods -n nsb {{range.items}}{{.metadata.name}}:{{end}}: b:
(Bnamespace "nsb" deleted
Successful
message:error: the namespace from the provided object "nsb" does not match the namespace "foo". You must pass '--namespace=nsb' to perform this operation.
has:the namespace from the provided object "nsb" does not match the namespace "foo".
apply.sh:276: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: 
service/a created
apply.sh:280: Successful get services a {{.metadata.name}}: a
Successful
message:The Service "a" is invalid: spec.clusterIPs[0]: Invalid value: []string{"10.0.0.12"}: may not change once set
... skipping 26 lines ...
apply.sh:302: Successful get deployment test-the-deployment {{.metadata.name}}: test-the-deployment
apply.sh:303: Successful get service test-the-service {{.metadata.name}}: test-the-service
(Bconfigmap "test-the-map" deleted
service "test-the-service" deleted
deployment.apps "test-the-deployment" deleted
Successful
message:Error from server (NotFound): namespaces "multi-resource-ns" not found
has:namespaces "multi-resource-ns" not found
apply.sh:311: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:namespace/multi-resource-ns created
Error from server (NotFound): error when creating "hack/testdata/multi-resource-1.yaml": namespaces "multi-resource-ns" not found
has:namespaces "multi-resource-ns" not found
Successful
message:Error from server (NotFound): pods "test-pod" not found
has:pods "test-pod" not found
pod/test-pod created
namespace/multi-resource-ns unchanged
apply.sh:319: Successful get pods test-pod -n multi-resource-ns {{.metadata.name}}: test-pod
(Bpod "test-pod" deleted
namespace "multi-resource-ns" deleted
apply.sh:325: Successful get configmaps --field-selector=metadata.name=foo {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:configmap/foo created
error: unable to recognize "hack/testdata/multi-resource-2.yaml": no matches for kind "Bogus" in version "example.com/v1"
has:no matches for kind "Bogus" in version "example.com/v1"
apply.sh:331: Successful get configmaps foo {{.metadata.name}}: foo
(Bconfigmap "foo" deleted
apply.sh:337: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:pod/pod-a created
... skipping 5 lines ...
(Bpod "pod-a" deleted
pod "pod-c" deleted
apply.sh:345: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
apply.sh:349: Successful get crds {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:customresourcedefinition.apiextensions.k8s.io/widgets.example.com created
error: unable to recognize "hack/testdata/multi-resource-4.yaml": no matches for kind "Widget" in version "example.com/v1"
has:no matches for kind "Widget" in version "example.com/v1"
Successful
message:Error from server (NotFound): widgets.example.com "foo" not found
has:widgets.example.com "foo" not found
apply.sh:355: Successful get crds widgets.example.com {{.metadata.name}}: widgets.example.com
I1020 22:56:06.767679   56921 namespace_controller.go:185] Namespace has been deleted multi-resource-ns
I1020 22:56:07.026103   53374 controller.go:611] quota admission added evaluator for: widgets.example.com
widget.example.com/foo created
customresourcedefinition.apiextensions.k8s.io/widgets.example.com unchanged
... skipping 32 lines ...
message:883
has:883
pod "test-pod" deleted
apply.sh:414: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
+++ [1020 22:56:09] Testing upgrade kubectl client-side apply to server-side apply
pod/test-pod created
error: Apply failed with 1 conflict: conflict with "kubectl-client-side-apply" using v1: .metadata.labels.name
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
  command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
  manifest to remove references to the fields that should keep their
... skipping 77 lines ...
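The conflict message above (truncated by the log skip) is server-side apply field ownership at work. A hedged example of the forced-takeover path it mentions; the manifest name is illustrative:

    kubectl apply --server-side --force-conflicts -f pod.yaml
    # ...or edit pod.yaml to drop .metadata.labels.name so that field stays owned
    # by the kubectl-client-side-apply manager.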
run.sh:32: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
run.sh:35: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/nginx-extensions created
run.sh:39: Successful get pod {{range.items}}{{.metadata.name}}:{{end}}: nginx-extensions:
(Bpod "nginx-extensions" deleted
W1020 22:56:12.871473   53374 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured
E1020 22:56:12.873040   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:pod/test1 created
has:pod/test1 created
pod "test1" deleted
Successful
message:error: Invalid image name "InvalidImageName": invalid reference format
has:error: Invalid image name "InvalidImageName": invalid reference format
+++ exit code: 0
Recording: run_kubectl_create_filter_tests
Running command: run_kubectl_create_filter_tests

+++ Running case: test-cmd.run_kubectl_create_filter_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 3 lines ...
Context "test" modified.
+++ [1020 22:56:13] Testing kubectl create filter
create.sh:50: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/selector-test-pod created
create.sh:54: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_apply_deployments_tests
Running command: run_kubectl_apply_deployments_tests

+++ Running case: test-cmd.run_kubectl_apply_deployments_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_apply_deployments_tests
+++ [1020 22:56:13] Creating namespace namespace-1634770573-15886
namespace/namespace-1634770573-15886 created
Context "test" modified.
+++ [1020 22:56:13] Testing kubectl apply deployments
E1020 22:56:14.009166   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:121: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:122: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:123: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/my-depl created
I1020 22:56:14.482639   56921 event.go:294] "Event occurred" object="namespace-1634770573-15886/my-depl" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set my-depl-84fb47b469 to 1"
I1020 22:56:14.499963   56921 event.go:294] "Event occurred" object="namespace-1634770573-15886/my-depl-84fb47b469" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: my-depl-84fb47b469-mkqsj"
... skipping 15 lines ...
deployment.apps/nginx created
I1020 22:56:15.910021   56921 event.go:294] "Event occurred" object="namespace-1634770573-15886/nginx" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-9bb9c4878 to 3"
I1020 22:56:15.916786   56921 event.go:294] "Event occurred" object="namespace-1634770573-15886/nginx-9bb9c4878" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-9bb9c4878-5j69z"
I1020 22:56:15.923794   56921 event.go:294] "Event occurred" object="namespace-1634770573-15886/nginx-9bb9c4878" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-9bb9c4878-97dg8"
I1020 22:56:15.923998   56921 event.go:294] "Event occurred" object="namespace-1634770573-15886/nginx-9bb9c4878" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-9bb9c4878-rvrj4"
apps.sh:154: Successful get deployment nginx {{.metadata.name}}: nginx
E1020 22:56:16.571823   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1634770573-15886\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx", Namespace: "namespace-1634770573-15886"
for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
has:Error from server (Conflict)
E1020 22:56:21.883156   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx configured
I1020 22:56:24.413874   56921 event.go:294] "Event occurred" object="namespace-1634770573-15886/nginx" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-6dd6cfdb57 to 3"
I1020 22:56:24.424202   56921 event.go:294] "Event occurred" object="namespace-1634770573-15886/nginx-6dd6cfdb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-6dd6cfdb57-gjslw"
I1020 22:56:24.431169   56921 event.go:294] "Event occurred" object="namespace-1634770573-15886/nginx-6dd6cfdb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-6dd6cfdb57-r78kw"
I1020 22:56:24.432204   56921 event.go:294] "Event occurred" object="namespace-1634770573-15886/nginx-6dd6cfdb57" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-6dd6cfdb57-rgxkt"
Successful
... skipping 80 lines ...
+status:
+  phase: Pending
+  qosClass: BestEffort
has:test-pod
diff.sh:33: Successful get pod {{range.items}}{{ if eq .metadata.name \"test-pod\" }}found{{end}}{{end}}:: :
pod/test-pod created
E1020 22:56:29.611994   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
diff.sh:36: Successful get pod {{range.items}}{{ if eq .metadata.name \"test-pod\" }}found{{end}}{{end}}:: found:
Successful
message:1024
has:1024
Successful
message:diff -u -N /tmp/LIVE-1132948015/v1.Pod.namespace-1634770589-16616.test-pod /tmp/MERGED-1619126769/v1.Pod.namespace-1634770589-16616.test-pod
... skipping 208 lines ...
+++ [1020 22:56:31] Creating namespace namespace-1634770591-21862
namespace/namespace-1634770591-21862 created
Context "test" modified.
+++ [1020 22:56:32] Testing kubectl get
get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:{
    "apiVersion": "v1",
    "items": [],
... skipping 23 lines ...
has not:No resources found
Successful
message:NAME
has not:No resources found
get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:error: the server doesn't have a resource type "foobar"
has not:No resources found
Successful
message:No resources found in namespace-1634770591-21862 namespace.
has:No resources found
Successful
message:
has not:No resources found
Successful
message:No resources found in namespace-1634770591-21862 namespace.
has:No resources found
get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
Successful
message:Error from server (NotFound): pods "abc" not found
has not:List
Successful
message:I1020 22:56:34.066238   68934 loader.go:372] Config loaded from file:  /tmp/tmp.esUsaGxHM9/.kube/config
I1020 22:56:34.071902   68934 round_trippers.go:541] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 5 milliseconds
I1020 22:56:34.102928   68934 round_trippers.go:541] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
I1020 22:56:34.104625   68934 round_trippers.go:541] GET https://127.0.0.1:6443/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds
... skipping 598 lines ...
I1020 22:56:41.692059   56921 shared_informer.go:247] Caches are synced for garbage collector 
get.sh:158: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(B<no value>Successful
message:valid-pod:
has:valid-pod:
Successful
message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2021-10-20T22:56:41Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fieldsType":"FieldsV1", "fieldsV1":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl-create", "operation":"Update", "time":"2021-10-20T22:56:41Z"}}, "name":"valid-pod", "namespace":"namespace-1634770601-19661", "resourceVersion":"1049", "uid":"c69d37c4-18a8-4191-aaf3-668c6c45a458"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "preemptionPolicy":"PreemptLowerPriority", "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
has:missing is not found
error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
Successful
message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2021-10-20T22:56:41Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl-create","operation":"Update","time":"2021-10-20T22:56:41Z"}],"name":"valid-pod","namespace":"namespace-1634770601-19661","resourceVersion":"1049","uid":"c69d37c4-18a8-4191-aaf3-668c6c45a458"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority","priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2021-10-20T22:56:41Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fieldsType:FieldsV1 fieldsV1:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl-create operation:Update time:2021-10-20T22:56:41Z]] name:valid-pod namespace:namespace-1634770601-19661 resourceVersion:1049 uid:c69d37c4-18a8-4191-aaf3-668c6c45a458] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true preemptionPolicy:PreemptLowerPriority priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
... skipping 84 lines ...
  terminationGracePeriodSeconds: 30
status:
  phase: Pending
  qosClass: Guaranteed
has:name: valid-pod
Successful
message:Error from server (NotFound): pods "invalid-pod" not found
has:"invalid-pod" not found
pod "valid-pod" deleted
get.sh:196: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/redis-master created
pod/valid-pod created
Successful
message:redis-master valid-pod
has:redis-master valid-pod
pod "redis-master" deleted
pod "valid-pod" deleted
get.sh:210: Successful get configmaps --field-selector=metadata.name=test-the-map {{range.items}}{{.metadata.name}}:{{end}}: 
get.sh:211: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
get.sh:212: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: 
E1020 22:56:47.213986   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
configmap/test-the-map created
service/test-the-service created
deployment.apps/test-the-deployment created
I1020 22:56:47.568458   56921 event.go:294] "Event occurred" object="namespace-1634770601-19661/test-the-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-the-deployment-747c788cf to 3"
I1020 22:56:47.597457   56921 event.go:294] "Event occurred" object="namespace-1634770601-19661/test-the-deployment-747c788cf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-the-deployment-747c788cf-vpw2m"
I1020 22:56:47.608286   56921 event.go:294] "Event occurred" object="namespace-1634770601-19661/test-the-deployment-747c788cf" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-the-deployment-747c788cf-flb7q"
... skipping 23 lines ...
+++ [1020 22:56:50] Creating namespace namespace-1634770610-14982
namespace/namespace-1634770610-14982 created
Context "test" modified.
+++ [1020 22:56:50] Testing kubectl exec POD COMMAND
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
pod/test-pod created
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pods "test-pod" not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
pod "test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_exec_resource_name_tests
Running command: run_kubectl_exec_resource_name_tests

... skipping 3 lines ...
+++ [1020 22:56:50] Creating namespace namespace-1634770610-10709
namespace/namespace-1634770610-10709 created
Context "test" modified.
+++ [1020 22:56:50] Testing kubectl exec TYPE/NAME COMMAND
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: the server doesn't have a resource type "foo"
has:error:
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): deployments.apps "bar" not found
has:"bar" not found
pod/test-pod created
replicaset.apps/frontend created
I1020 22:56:51.671910   56921 event.go:294] "Event occurred" object="namespace-1634770610-10709/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-h7h8m"
I1020 22:56:51.683686   56921 event.go:294] "Event occurred" object="namespace-1634770610-10709/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-p7zcc"
I1020 22:56:51.683726   56921 event.go:294] "Event occurred" object="namespace-1634770610-10709/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-tgbxv"
configmap/test-set-env-config created
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
has:not implemented
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod, type/name or --filename must be specified
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod frontend-h7h8m does not have a host assigned
has not:not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod frontend-h7h8m does not have a host assigned
has not:pod, type/name or --filename must be specified
pod "test-pod" deleted
replicaset.apps "frontend" deleted
configmap "test-set-env-config" deleted
+++ exit code: 0
Recording: run_create_secret_tests
Running command: run_create_secret_tests

+++ Running case: test-cmd.run_create_secret_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_create_secret_tests
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
message:user-specified
has:user-specified
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"07bc40fe-1ba0-4714-8ef5-e3e71a2c769d","resourceVersion":"1129","creationTimestamp":"2021-10-20T22:56:52Z"}}
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"07bc40fe-1ba0-4714-8ef5-e3e71a2c769d","resourceVersion":"1130","creationTimestamp":"2021-10-20T22:56:52Z"},"data":{"key1":"config1"}}
has:uid
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"07bc40fe-1ba0-4714-8ef5-e3e71a2c769d","resourceVersion":"1130","creationTimestamp":"2021-10-20T22:56:52Z"},"data":{"key1":"config1"}}
has:config1
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"07bc40fe-1ba0-4714-8ef5-e3e71a2c769d"}}
Successful
message:Error from server (NotFound): configmaps "tester-update-cm" not found
has:configmaps "tester-update-cm" not found
+++ exit code: 0
Recording: run_kubectl_create_kustomization_directory_tests
Running command: run_kubectl_create_kustomization_directory_tests

+++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 73 lines ...
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}
has:apps/v1beta1
deployment.apps "nginx" deleted
Successful
message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
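The decode failure above comes from applying a directory tree with -R/--recursive; one testdata file is deliberately broken ("ind" instead of "kind") to exercise partial failure. A hedged invocation:

    kubectl create -f hack/testdata/recursive/pod --recursive
    # busybox0 and busybox1 are created; busybox-broken.yaml is rejected with
    # "Object 'Kind' is missing".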
Successful
message:nginx:
has:nginx:
+++ exit code: 0
Recording: run_kubectl_delete_allnamespaces_tests
... skipping 104 lines ...
has:Timeout
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
Successful
message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
has:Invalid timeout value
pod "valid-pod" deleted
+++ exit code: 0
Recording: run_crd_tests
Running command: run_crd_tests

... skipping 151 lines ...
Flag --record has been deprecated, --record will be removed in the future
foo.company.com/test patched
crd.sh:282: Successful get foos/test {{.patched}}: value2
Flag --record has been deprecated, --record will be removed in the future
foo.company.com/test patched
crd.sh:284: Successful get foos/test {{.patched}}: <no value>
+++ [1020 22:57:06] "kubectl patch --local" returns error as expected for CustomResource: error: strategic merge patch is not supported for company.com/v1, Kind=Foo locally, try --type merge
{
    "apiVersion": "company.com/v1",
    "kind": "Foo",
    "metadata": {
        "annotations": {
            "kubernetes.io/change-cause": "kubectl patch foos/test --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 323 lines ...
crd.sh:505: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: 
namespace/non-native-resources created
bar.company.com/test created
crd.sh:510: Successful get bars {{len .items}}: 1
(Bnamespace "non-native-resources" deleted
crd.sh:513: Successful get bars {{len .items}}: 0
Error from server (NotFound): namespaces "non-native-resources" not found
customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
+++ exit code: 0
+++ [1020 22:57:23] Testing recursive resources
... skipping 2 lines ...
Context "test" modified.
generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:pod/busybox0 created
pod/busybox1 created
error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
W1020 22:57:24.150484   53374 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured
E1020 22:57:24.151909   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
Successful
message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
W1020 22:57:24.248194   53374 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured
E1020 22:57:24.249886   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
W1020 22:57:24.456171   53374 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured
E1020 22:57:24.457704   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:pod/busybox0 replaced
pod/busybox1 replaced
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:Name:         busybox0
Namespace:    namespace-1634770643-28259
Priority:     0
Node:         <none>
... skipping 159 lines ...
has:Object 'Kind' is missing
generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
Successful
message:pod/busybox0 annotated
pod/busybox1 annotated
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E1020 22:57:25.200057   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1020 22:57:25.272174   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1020 22:57:25.300651   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:Warning: resource pods/busybox0 is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
pod/busybox0 configured
Warning: resource pods/busybox1 is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
pod/busybox1 configured
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
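The "turn validation off with --validate=false" hint above refers only to client-side schema validation; a sketch of that escape hatch, assuming the same fixture directory:

  # Skip client-side validation; the well-formed documents are still replaced,
  # but the broken one continues to fail at decode time.
  kubectl replace -f hack/testdata/recursive/pod-modify --recursive --validate=false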
generic-resources.sh:264: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:273: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:278: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
Successful
message:pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:283: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:288: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
Successful
message:pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:293: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:297: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "busybox0" force deleted
pod "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
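The "Immediate deletion" warning above comes from force deletion, which removes the API object without waiting for the kubelet to confirm termination. A sketch of the command shape these tests exercise:

  # Force-delete everything under the fixture tree (test clusters only).
  kubectl delete -f hack/testdata/recursive/pod --recursive --force --grace-period=0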
generic-resources.sh:302: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/busybox0 created
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1020 22:57:26.658383   56921 event.go:294] "Event occurred" object="namespace-1634770643-28259/busybox0" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox0-2qs2r"
I1020 22:57:26.665171   56921 event.go:294] "Event occurred" object="namespace-1634770643-28259/busybox1" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox1-r2llb"
generic-resources.sh:306: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E1020 22:57:26.819249   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:311: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:312: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:313: Successful get rc busybox1 {{.spec.replicas}}: 1
generic-resources.sh:318: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
E1020 22:57:27.232930   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:319: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
Successful
message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
horizontalpodautoscaler.autoscaling/busybox1 autoscaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
horizontalpodautoscaler.autoscaling "busybox0" deleted
horizontalpodautoscaler.autoscaling "busybox1" deleted
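The hpa assertions above (min 1, max 2, target 80%) correspond directly to the flags passed to kubectl autoscale; a single-resource sketch of the same check:

  kubectl autoscale rc busybox0 --min=1 --max=2 --cpu-percent=80
  kubectl get hpa busybox0 -o go-template='{{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}'
  # expected output: 1 2 80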
generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:328: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:329: Successful get rc busybox1 {{.spec.replicas}}: 1
I1020 22:57:27.849977   56921 namespace_controller.go:185] Namespace has been deleted non-native-resources
generic-resources.sh:333: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
generic-resources.sh:334: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
Successful
message:service/busybox0 exposed
service/busybox1 exposed
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
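The "<no value> 80" assertions above reflect kubectl expose defaults: the port comes from the flag and the port name is left unset. A sketch:

  kubectl expose rc busybox0 --port=80
  kubectl get service busybox0 -o go-template='{{(index .spec.ports 0).port}}'   # 80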
generic-resources.sh:340: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:341: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:342: Successful get rc busybox1 {{.spec.replicas}}: 1
E1020 22:57:28.348522   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1020 22:57:28.383933   56921 event.go:294] "Event occurred" object="namespace-1634770643-28259/busybox0" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox0-5p948"
I1020 22:57:28.405377   56921 event.go:294] "Event occurred" object="namespace-1634770643-28259/busybox1" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox1-tq6br"
generic-resources.sh:346: Successful get rc busybox0 {{.spec.replicas}}: 2
generic-resources.sh:347: Successful get rc busybox1 {{.spec.replicas}}: 2
Successful
message:replicationcontroller/busybox0 scaled
replicationcontroller/busybox1 scaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
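The scale step above drives every replication controller found under the fixture tree to two replicas; a sketch of the command shape, assuming scale accepts the same filename flags the other verbs use here:

  kubectl scale --replicas=2 -f hack/testdata/recursive/rc --recursive
  kubectl get rc busybox0 -o go-template='{{.spec.replicas}}'   # 2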
generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:356: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:361: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx1-deployment created
deployment.apps/nginx0-deployment created
error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1020 22:57:29.089147   56921 event.go:294] "Event occurred" object="namespace-1634770643-28259/nginx1-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx1-deployment-758b5949b6 to 2"
I1020 22:57:29.096008   56921 event.go:294] "Event occurred" object="namespace-1634770643-28259/nginx0-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx0-deployment-75db9cdfd9 to 2"
I1020 22:57:29.096100   56921 event.go:294] "Event occurred" object="namespace-1634770643-28259/nginx1-deployment-758b5949b6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx1-deployment-758b5949b6-z9vfl"
I1020 22:57:29.103074   56921 event.go:294] "Event occurred" object="namespace-1634770643-28259/nginx0-deployment-75db9cdfd9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx0-deployment-75db9cdfd9-vljj6"
I1020 22:57:29.103223   56921 event.go:294] "Event occurred" object="namespace-1634770643-28259/nginx1-deployment-758b5949b6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx1-deployment-758b5949b6-7dshx"
I1020 22:57:29.109661   56921 event.go:294] "Event occurred" object="namespace-1634770643-28259/nginx0-deployment-75db9cdfd9" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx0-deployment-75db9cdfd9-fd8cv"
generic-resources.sh:365: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
generic-resources.sh:366: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
generic-resources.sh:370: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
Successful
message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
deployment.apps/nginx1-deployment paused
deployment.apps/nginx0-deployment paused
generic-resources.sh:378: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
Successful
message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx0-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx1-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
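The revision tables above are kubectl rollout history output; with no recorded change cause each revision shows <none>. A sketch against one of the deployments created here:

  kubectl rollout history deployment/nginx1-deployment
  # REVISION  CHANGE-CAUSE
  # 1         <none>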
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
deployment.apps "nginx1-deployment" force deleted
deployment.apps "nginx0-deployment" force deleted
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
E1020 22:57:30.505040   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:400: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/busybox0 created
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I1020 22:57:31.263999   56921 event.go:294] "Event occurred" object="namespace-1634770643-28259/busybox0" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox0-gzdzv"
I1020 22:57:31.271121   56921 event.go:294] "Event occurred" object="namespace-1634770643-28259/busybox1" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox1-mxsbk"
generic-resources.sh:404: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
... skipping 3 lines ...
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox0" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox1" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox0" resuming is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox1" resuming is not supported
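Every pause/resume attempt above fails because those verbs are implemented for Deployments, not ReplicationControllers. A sketch of the contrast, against a deployment such as the ones created earlier:

  kubectl rollout pause deployment/nginx1-deployment   # works for deployments
  kubectl rollout pause rc/busybox0                    # error: pausing is not supported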
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
Recording: run_namespace_tests
Running command: run_namespace_tests

+++ Running case: test-cmd.run_namespace_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_namespace_tests
+++ [1020 22:57:32] Testing kubectl(v1:namespaces)
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created (dry run)
namespace/my-namespace created (server dry run)
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
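Both dry-run variants above report the namespace as created without persisting it, which is why the follow-up get still returns NotFound. A sketch:

  kubectl create namespace my-namespace --dry-run=client -o name   # validated locally only
  kubectl create namespace my-namespace --dry-run=server -o name   # validated by the apiserver, not stored
  kubectl get namespace my-namespace                               # Error ... not found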
E1020 22:57:33.094458   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/my-namespace created
core.sh:1471: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
query for namespaces had limit param
query for resourcequotas had limit param
query for limitranges had limit param
query for namespaces had user-specified limit param
... skipping 122 lines ...
I1020 22:57:33.513068   74100 round_trippers.go:541] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1634770621-28758/resourcequotas?limit=500 200 OK in 1 milliseconds
I1020 22:57:33.514315   74100 round_trippers.go:541] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1634770621-28758/limitranges?limit=500 200 OK in 1 milliseconds
I1020 22:57:33.515870   74100 round_trippers.go:541] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1634770643-28259 200 OK in 1 milliseconds
I1020 22:57:33.517229   74100 round_trippers.go:541] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1634770643-28259/resourcequotas?limit=500 200 OK in 1 milliseconds
I1020 22:57:33.518480   74100 round_trippers.go:541] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1634770643-28259/limitranges?limit=500 200 OK in 1 milliseconds
namespace "my-namespace" deleted
E1020 22:57:34.460637   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1020 22:57:37.440907   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/my-namespace condition met
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created
core.sh:1482: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
... skipping 31 lines ...
namespace "namespace-1634770616-18279" deleted
namespace "namespace-1634770616-28441" deleted
namespace "namespace-1634770618-28704" deleted
namespace "namespace-1634770620-3895" deleted
namespace "namespace-1634770621-28758" deleted
namespace "namespace-1634770643-28259" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:warning: deleting cluster-scoped resources
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
namespace "my-namespace" deleted
namespace "namespace-1634770452-4229" deleted
... skipping 29 lines ...
namespace "namespace-1634770616-18279" deleted
namespace "namespace-1634770616-28441" deleted
namespace "namespace-1634770618-28704" deleted
namespace "namespace-1634770620-3895" deleted
namespace "namespace-1634770621-28758" deleted
namespace "namespace-1634770643-28259" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:namespace "my-namespace" deleted
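The three Forbidden errors above are the apiserver protecting its built-in namespaces: default, kube-public and kube-system may never be deleted, even by a bulk delete. A sketch (destructive; throwaway test clusters only):

  kubectl delete namespaces --all
  # every user namespace is removed; default, kube-public and kube-system
  # come back Forbidden: "this namespace may not be deleted"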
namespace/quotas created
core.sh:1489: Successful get namespaces/quotas {{.metadata.name}}: quotas
core.sh:1490: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name \"test-quota\" }}found{{end}}{{end}}:: :
resourcequota/test-quota created (dry run)
resourcequota/test-quota created (server dry run)
... skipping 13 lines ...
I1020 22:57:41.464427   56921 shared_informer.go:240] Waiting for caches to sync for resource quota
I1020 22:57:41.464468   56921 shared_informer.go:247] Caches are synced for resource quota 
I1020 22:57:41.709516   56921 shared_informer.go:240] Waiting for caches to sync for garbage collector
I1020 22:57:41.709571   56921 shared_informer.go:247] Caches are synced for garbage collector 
I1020 22:57:42.079113   56921 horizontal.go:361] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1634770643-28259
I1020 22:57:42.087288   56921 horizontal.go:361] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1634770643-28259
E1020 22:57:43.291050   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1020 22:57:44.539184   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1020 22:57:45.271899   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1511: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
namespace/other created
core.sh:1515: Successful get namespaces/other {{.metadata.name}}: other
core.sh:1519: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
pod/valid-pod created
core.sh:1523: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(Bcore.sh:1525: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Successful
message:error: a resource cannot be retrieved by name across all namespaces
has:a resource cannot be retrieved by name across all namespaces
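Fetching a resource by name across all namespaces is rejected, as the message above shows, because a name is only unique within one namespace. A sketch of the failing form and one possible workaround (the field-selector variant is illustrative, not what the test runs):

  kubectl get pods valid-pod --all-namespaces                                  # error
  kubectl get pods --all-namespaces --field-selector=metadata.name=valid-pod   # lists matches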
core.sh:1532: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:1536: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
namespace "other" deleted
... skipping 124 lines ...
core.sh:921: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
secret "secret-string-data" deleted
core.sh:930: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
secret "test-secret" deleted
namespace "test-secrets" deleted
I1020 22:57:57.820215   56921 namespace_controller.go:185] Namespace has been deleted other
E1020 22:58:01.260452   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Recording: run_configmap_tests
Running command: run_configmap_tests

+++ Running case: test-cmd.run_configmap_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 4 lines ...
+++ [1020 22:58:01] Testing configmaps
configmap/test-configmap created
core.sh:28: Successful get configmap/test-configmap {{.metadata.name}}: test-configmap
configmap "test-configmap" deleted
core.sh:33: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-configmaps\" }}found{{end}}{{end}}:: :
namespace/test-configmaps created
E1020 22:58:02.067861   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:37: Successful get namespaces/test-configmaps {{.metadata.name}}: test-configmaps
core.sh:41: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-configmap\" }}found{{end}}{{end}}:: :
core.sh:42: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-binary-configmap\" }}found{{end}}{{end}}:: :
configmap/test-configmap created (dry run)
configmap/test-configmap created (server dry run)
core.sh:46: Successful get configmaps {{range.items}}{{ if eq .metadata.name \"test-configmap\" }}found{{end}}{{end}}:: :
... skipping 14 lines ...
I1020 22:58:03.090619   75501 round_trippers.go:541] GET https://127.0.0.1:6443/api/v1/namespaces/test-configmaps/events?fieldSelector=involvedObject.name%3Dtest-binary-configmap%2CinvolvedObject.namespace%3Dtest-configmaps%2CinvolvedObject.kind%3DConfigMap%2CinvolvedObject.uid%3D429d3edc-0bdf-4e44-b109-d14a7226ddaf&limit=500 200 OK in 1 milliseconds
I1020 22:58:03.092323   75501 round_trippers.go:541] GET https://127.0.0.1:6443/api/v1/namespaces/test-configmaps/configmaps/test-configmap 200 OK in 1 milliseconds
I1020 22:58:03.093831   75501 round_trippers.go:541] GET https://127.0.0.1:6443/api/v1/namespaces/test-configmaps/events?fieldSelector=involvedObject.namespace%3Dtest-configmaps%2CinvolvedObject.kind%3DConfigMap%2CinvolvedObject.uid%3D214446e3-249c-4d64-bce6-62324e9c1b39%2CinvolvedObject.name%3Dtest-configmap&limit=500 200 OK in 1 milliseconds
configmap "test-configmap" deleted
configmap "test-binary-configmap" deleted
namespace "test-configmaps" deleted
E1020 22:58:04.922764   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1020 22:58:06.307422   56921 namespace_controller.go:185] Namespace has been deleted test-secrets
+++ exit code: 0
Recording: run_client_config_tests
Running command: run_client_config_tests

+++ Running case: test-cmd.run_client_config_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_client_config_tests
+++ [1020 22:58:08] Creating namespace namespace-1634770688-29832
namespace/namespace-1634770688-29832 created
Context "test" modified.
+++ [1020 22:58:08] Testing client config
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:Error in configuration: context was not found for specified context: missing-context
has:context was not found for specified context: missing-context
Successful
message:error: no server found for cluster "missing-cluster"
has:no server found for cluster "missing-cluster"
Successful
message:error: auth info "missing-user" does not exist
has:auth info "missing-user" does not exist
Successful
message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
has:error loading config file
Successful
message:error: stat missing-config: no such file or directory
has:no such file or directory
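Each failure above maps to one client-config flag pointing at something that does not exist; a sketch of the same probes:

  kubectl get pods --kubeconfig=missing          # stat missing: no such file or directory
  kubectl get pods --context=missing-context     # context was not found
  kubectl get pods --cluster=missing-cluster     # no server found for cluster
  kubectl get pods --user=missing-user           # auth info does not exist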
+++ exit code: 0
Recording: run_service_accounts_tests
Running command: run_service_accounts_tests

+++ Running case: test-cmd.run_service_accounts_tests 
... skipping 25 lines ...
I1020 22:58:10.067565   75961 round_trippers.go:541] GET https://127.0.0.1:6443/api/v1/namespaces/test-service-accounts/events?fieldSelector=involvedObject.name%3Ddefault%2CinvolvedObject.namespace%3Dtest-service-accounts%2CinvolvedObject.kind%3DServiceAccount%2CinvolvedObject.uid%3D7408d81b-eaf2-4b33-91b4-60a9689b6111&limit=500 200 OK in 1 milliseconds
I1020 22:58:10.070267   75961 round_trippers.go:541] GET https://127.0.0.1:6443/api/v1/namespaces/test-service-accounts/serviceaccounts/test-service-account 200 OK in 1 milliseconds
I1020 22:58:10.071592   75961 round_trippers.go:541] GET https://127.0.0.1:6443/api/v1/namespaces/test-service-accounts/secrets?limit=500 200 OK in 1 milliseconds
I1020 22:58:10.073074   75961 round_trippers.go:541] GET https://127.0.0.1:6443/api/v1/namespaces/test-service-accounts/events?fieldSelector=involvedObject.namespace%3Dtest-service-accounts%2CinvolvedObject.kind%3DServiceAccount%2CinvolvedObject.uid%3Daf9b75c9-c3a4-4b36-b0e3-8b7802400d5d%2CinvolvedObject.name%3Dtest-service-account&limit=500 200 OK in 1 milliseconds
serviceaccount "test-service-account" deleted
namespace "test-service-accounts" deleted
E1020 22:58:11.336479   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I1020 22:58:13.502724   56921 namespace_controller.go:185] Namespace has been deleted test-configmaps
+++ exit code: 0
Recording: run_job_tests
Running command: run_job_tests

+++ Running case: test-cmd.run_job_tests 
... skipping 21 lines ...
Labels:                        <none>
Annotations:                   <none>
Schedule:                      59 23 31 2 *
Concurrency Policy:            Allow
Suspend:                       False
Successful Job History Limit:  3
Failed Job History Limit:      1
Starting Deadline Seconds:     <unset>
Selector:                      <unset>
Parallelism:                   <unset>
Completions:                   <unset>
Pod Template:
  Labels:  <none>
... skipping 54 lines ...
Annotations:      batch.kubernetes.io/job-tracking: 
                  cronjob.kubernetes.io/instantiate: manual
Parallelism:      1
Completions:      1
Completion Mode:  NonIndexed
Start Time:       Wed, 20 Oct 2021 22:58:17 +0000
Pods Statuses:    1 Running / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=04e524b2-388d-44a0-8304-9d4d27a60f84
           job-name=test-job
  Containers:
   pi:
    Image:      k8s.gcr.io/perl
... skipping 468 lines ...
  type: ClusterIP
status:
  loadBalancer: {}
Successful
message:kubectl-create kubectl-set
has:kubectl-set
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
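The "--filename when --local is set" error above is the kubectl set family refusing to run --local without an input object, since --local never contacts the server. A sketch, assuming a hypothetical manifest file:

  # Edit the selector of a local manifest without touching the cluster.
  kubectl set selector -f redis-master-service.yaml --local -o yaml app=frontend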
core.sh:1034: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
service/redis-master selector updated
Successful
message:Error from server (Conflict): Operation cannot be fulfilled on services "redis-master": the object has been modified; please apply your changes to the latest version and try again
has:Conflict
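The Conflict above is optimistic concurrency at work: a write carrying a stale resourceVersion is rejected. A sketch that reproduces it, assuming throwaway paths:

  kubectl get service redis-master -o yaml > /tmp/svc.yaml   # captures the current resourceVersion
  kubectl patch service redis-master -p '{"metadata":{"labels":{"touched":"yes"}}}'   # bumps it
  kubectl replace -f /tmp/svc.yaml   # Conflict: the object has been modified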
core.sh:1047: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
service "redis-master" deleted
core.sh:1054: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I1020 22:58:27.950035   56921 namespace_controller.go:185] Namespace has been deleted test-jobs
core.sh:1058: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
... skipping 136 lines ...
apps.sh:90: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:91: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
daemonset.apps/bind rolled back
apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:95: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
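Rolling back to a revision that was never recorded fails cleanly, as above; omitting --to-revision targets the previous revision instead. A sketch:

  kubectl rollout undo daemonset/bind --to-revision=1000000   # error: unable to find specified revision
  kubectl rollout undo daemonset/bind                         # rolls back one revision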
apps.sh:99: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:100: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
daemonset.apps/bind rolled back
apps.sh:103: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
apps.sh:104: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 36 lines ...
Namespace:    namespace-1634770716-31934
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1634770716-31934
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
Namespace:    namespace-1634770716-31934
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
Namespace:    namespace-1634770716-31934
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 27 lines ...
Namespace:    namespace-1634770716-31934
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1634770716-31934
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1634770716-31934
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
Namespace:    namespace-1634770716-31934
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 25 lines ...
core.sh:1240: Successful get rc frontend {{.spec.replicas}}: 3
replicationcontroller/frontend scaled
E1020 22:58:38.915766   56921 replica_set.go:205] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend  namespace-1634770716-31934  7cdc80d7-007a-4a3e-a8df-682e97d9d75c 2045 2 2021-10-20 22:58:37 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  [{kubectl Update v1 <nil> FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {kube-controller-manager Update v1 2021-10-20 22:58:37 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status} {kubectl-create Update v1 2021-10-20 22:58:37 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{"f:selector":{},"f:template":{".":{},"f:metadata":{".":{},"f:creationTimestamp":{},"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{".":{},"f:containers":{".":{},"k:{\"name\":\"php-redis\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"GET_HOSTS_FROM\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v4 [] []  [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003591b68 <nil> ClusterFirst map[]   <nil>  false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
I1020 22:58:38.935761   56921 event.go:294] "Event occurred" object="namespace-1634770716-31934/frontend" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: frontend-sblh6"
core.sh:1244: Successful get rc frontend {{.spec.replicas}}: 2
core.sh:1248: Successful get rc frontend {{.spec.replicas}}: 2
error: Expected replicas to be 3, was 2
core.sh:1252: Successful get rc frontend {{.spec.replicas}}: 2
core.sh:1256: Successful get rc frontend {{.spec.replicas}}: 2
replicationcontroller/frontend scaled
I1020 22:58:39.420990   56921 event.go:294] "Event occurred" object="namespace-1634770716-31934/frontend" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-q6cmm"
core.sh:1260: Successful get rc frontend {{.spec.replicas}}: 3
core.sh:1264: Successful get rc frontend {{.spec.replicas}}: 3
... skipping 31 lines ...
deployment.apps "nginx-deployment" deleted
Successful
message:service/expose-test-deployment exposed
has:service/expose-test-deployment exposed
service "expose-test-deployment" deleted
Successful
message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
See 'kubectl expose -h' for help and examples
has:invalid deployment: no selectors
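kubectl expose builds the service selector from the workload's own selector, so a deployment without one cannot be exposed, as the error above shows. A sketch of the normal path:

  # Works because the deployment carries a selector the service can copy.
  kubectl expose deployment nginx-deployment --port=80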
deployment.apps/nginx-deployment created
I1020 22:58:41.463295   56921 event.go:294] "Event occurred" object="namespace-1634770716-31934/nginx-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-76b5cd66f5 to 3"
I1020 22:58:41.470279   56921 event.go:294] "Event occurred" object="namespace-1634770716-31934/nginx-deployment-76b5cd66f5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-76b5cd66f5-8wqbp"
I1020 22:58:41.478214   56921 event.go:294] "Event occurred" object="namespace-1634770716-31934/nginx-deployment-76b5cd66f5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-76b5cd66f5-ddkjv"
I1020 22:58:41.478728   56921 event.go:294] "Event occurred" object="namespace-1634770716-31934/nginx-deployment-76b5cd66f5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-76b5cd66f5-gmvxx"
core.sh:1307: Successful get deployment nginx-deployment {{.spec.replicas}}: 3
E1020 22:58:41.592550   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/nginx-deployment exposed
core.sh:1311: Successful get service nginx-deployment {{(index .spec.ports 0).port}}: 80
deployment.apps "nginx-deployment" deleted
service "nginx-deployment" deleted
replicationcontroller/frontend created
I1020 22:58:42.004442   56921 event.go:294] "Event occurred" object="namespace-1634770716-31934/frontend" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-sbbpk"
... skipping 3 lines ...
service/frontend exposed
core.sh:1322: Successful get service frontend {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
service/frontend-2 exposed
core.sh:1326: Successful get service frontend-2 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 443
pod/valid-pod created
service/frontend-3 exposed
E1020 22:58:42.817961   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1331: Successful get service frontend-3 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 444
service/frontend-4 exposed
core.sh:1335: Successful get service frontend-4 {{(index .spec.ports 0).port}}: 80
pod "valid-pod" deleted
service "frontend" deleted
service "frontend-2" deleted
service "frontend-3" deleted
service "frontend-4" deleted
Successful
message:error: cannot expose a Node
has:cannot expose
Successful
message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
has:metadata.name: Invalid value
Successful
message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 30 lines ...
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1403: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
horizontalpodautoscaler.autoscaling "frontend" deleted
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1407: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
horizontalpodautoscaler.autoscaling "frontend" deleted
E1020 22:58:45.967433   80027 run.go:120] "command failed" err="required flag(s) \"max\" not set"
replicationcontroller "frontend" deleted
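The "required flag(s) \"max\" not set" failure above is kubectl autoscale refusing to create an HPA without an upper bound; --min and --cpu-percent are optional, --max is not. A sketch:

  kubectl autoscale rc frontend --min=2 --cpu-percent=80           # fails: --max is required
  kubectl autoscale rc frontend --min=2 --max=3 --cpu-percent=80   # succeeds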
core.sh:1416: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
... skipping 24 lines ...
          limits:
            cpu: 300m
          requests:
            cpu: 300m
      terminationGracePeriodSeconds: 0
status: {}
Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
deployment.apps/nginx-deployment-resources created
I1020 22:58:46.500348   56921 event.go:294] "Event occurred" object="namespace-1634770716-31934/nginx-deployment-resources" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-resources-748ddcb48b to 3"
I1020 22:58:46.510247   56921 event.go:294] "Event occurred" object="namespace-1634770716-31934/nginx-deployment-resources-748ddcb48b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-748ddcb48b-k9glx"
I1020 22:58:46.515099   56921 event.go:294] "Event occurred" object="namespace-1634770716-31934/nginx-deployment-resources-748ddcb48b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-748ddcb48b-qwnnx"
I1020 22:58:46.515371   56921 event.go:294] "Event occurred" object="namespace-1634770716-31934/nginx-deployment-resources-748ddcb48b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-748ddcb48b-5vlzz"
core.sh:1422: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
core.sh:1423: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
core.sh:1424: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment-resources resource requirements updated
I1020 22:58:46.851095   56921 event.go:294] "Event occurred" object="namespace-1634770716-31934/nginx-deployment-resources" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-resources-7bfb7d56b6 to 1"
I1020 22:58:46.879397   56921 event.go:294] "Event occurred" object="namespace-1634770716-31934/nginx-deployment-resources-7bfb7d56b6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-7bfb7d56b6-rd56k"
core.sh:1427: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
core.sh:1428: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
error: unable to find container named redis
deployment.apps/nginx-deployment-resources resource requirements updated
I1020 22:58:47.192384   56921 event.go:294] "Event occurred" object="namespace-1634770716-31934/nginx-deployment-resources" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-resources-748ddcb48b to 2"
I1020 22:58:47.227089   56921 event.go:294] "Event occurred" object="namespace-1634770716-31934/nginx-deployment-resources" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-resources-75dbcccf44 to 1"
I1020 22:58:47.235310   56921 event.go:294] "Event occurred" object="namespace-1634770716-31934/nginx-deployment-resources-748ddcb48b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-resources-748ddcb48b-qwnnx"
I1020 22:58:47.235783   56921 event.go:294] "Event occurred" object="namespace-1634770716-31934/nginx-deployment-resources-75dbcccf44" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-75dbcccf44-nqkv8"
core.sh:1433: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
... skipping 155 lines ...
    status: "True"
    type: Progressing
  observedGeneration: 4
  replicas: 4
  unavailableReplicas: 4
  updatedReplicas: 1
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
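A minimal sketch of the failing and working invocations, assuming a local manifest named rsrc.yaml: with --local, kubectl renders the change client-side and therefore needs an explicit --filename.
  kubectl set resources --local --limits=cpu=200m -o yaml                  # fails: no --filename given
  kubectl set resources -f rsrc.yaml --local --limits=cpu=200m -o yaml     # prints the modified object without contacting the API server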
core.sh:1444: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
core.sh:1445: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
core.sh:1446: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
deployment.apps "nginx-deployment-resources" deleted
+++ exit code: 0
Recording: run_deployment_tests
Running command: run_deployment_tests
E1020 22:58:48.443096   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource

+++ Running case: test-cmd.run_deployment_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_deployment_tests
+++ [1020 22:58:48] Creating namespace namespace-1634770728-14950
namespace/namespace-1634770728-14950 created
... skipping 36 lines ...
                pod-template-hash=69dd6dcd84
Annotations:    deployment.kubernetes.io/desired-replicas: 1
                deployment.kubernetes.io/max-replicas: 2
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/test-nginx-apps
Replicas:       1 current / 1 desired
Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=test-nginx-apps
           pod-template-hash=69dd6dcd84
  Containers:
   nginx:
    Image:        k8s.gcr.io/nginx:test-cmd
... skipping 45 lines ...
I1020 22:58:49.470850   80635 round_trippers.go:541] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I1020 22:58:49.492134   80635 round_trippers.go:541] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/namespace-1634770728-14950/deployments?limit=500 200 OK in 2 milliseconds
I1020 22:58:49.495223   80635 round_trippers.go:541] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/namespace-1634770728-14950/deployments/test-nginx-apps 200 OK in 1 milliseconds
I1020 22:58:49.503740   80635 round_trippers.go:541] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1634770728-14950/events?fieldSelector=involvedObject.uid%3D34a08cb1-7bfc-49d0-a55e-ee750db63adb%2CinvolvedObject.name%3Dtest-nginx-apps%2CinvolvedObject.namespace%3Dnamespace-1634770728-14950%2CinvolvedObject.kind%3DDeployment&limit=500 200 OK in 1 milliseconds
I1020 22:58:49.506038   80635 round_trippers.go:541] GET https://127.0.0.1:6443/apis/apps/v1/namespaces/namespace-1634770728-14950/replicasets?labelSelector=app%3Dtest-nginx-apps&limit=500 200 OK in 1 milliseconds
(Bdeployment.apps "test-nginx-apps" deleted
E1020 22:58:49.731757   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:222: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx-with-command created (dry run)
deployment.apps/nginx-with-command created (server dry run)
apps.sh:226: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx-with-command created
I1020 22:58:50.101023   56921 event.go:294] "Event occurred" object="namespace-1634770728-14950/nginx-with-command" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-with-command-978c69cb6 to 1"
... skipping 66 lines ...
apps.sh:311: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
    Image:	k8s.gcr.io/nginx:test-cmd
deployment.apps/nginx rolled back (server dry run)
apps.sh:315: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
deployment.apps/nginx rolled back
apps.sh:319: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
error: unable to find specified revision 1000000 in history
apps.sh:322: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
deployment.apps/nginx rolled back
apps.sh:326: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
deployment.apps/nginx paused
error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
error: deployments.apps "nginx" can't restart paused deployment (run rollout resume first)
deployment.apps/nginx resumed
deployment.apps/nginx rolled back
    deployment.kubernetes.io/revision-history: 1,3
error: desired revision (3) is different from the running revision (5)
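A minimal sketch of the pause/resume sequence exercised above (deployment name from the log; the revision number is illustrative, and attributing the last error to `rollout status --revision` is an assumption):
  kubectl rollout pause deployment/nginx
  kubectl rollout undo deployment/nginx                   # fails: cannot rollback a paused deployment
  kubectl rollout restart deployment/nginx                # fails: cannot restart a paused deployment
  kubectl rollout resume deployment/nginx
  kubectl rollout undo deployment/nginx                   # succeeds once resumed
  kubectl rollout status deployment/nginx --revision=3    # errors when revision 3 is not the running revision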
deployment.apps/nginx restarted
I1020 22:58:57.746114   56921 event.go:294] "Event occurred" object="namespace-1634770728-14950/nginx" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-54785cbcb8 to 2"
I1020 22:58:57.766434   56921 event.go:294] "Event occurred" object="namespace-1634770728-14950/nginx" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-ddb9ccfc6 to 1"
I1020 22:58:57.766512   56921 event.go:294] "Event occurred" object="namespace-1634770728-14950/nginx-54785cbcb8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-54785cbcb8-c2pzt"
I1020 22:58:57.775585   56921 event.go:294] "Event occurred" object="namespace-1634770728-14950/nginx-ddb9ccfc6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-ddb9ccfc6-zd2r4"
Successful
... skipping 80 lines ...
apps.sh:370: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
(Bdeployment.apps/nginx-deployment image updated
I1020 22:59:00.288165   56921 event.go:294] "Event occurred" object="namespace-1634770728-14950/nginx-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-6dd48b9849 to 1"
I1020 22:59:00.295137   56921 event.go:294] "Event occurred" object="namespace-1634770728-14950/nginx-deployment-6dd48b9849" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-6dd48b9849-rxc2j"
apps.sh:373: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
apps.sh:374: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
error: unable to find container named "redis"
I1020 22:59:00.551881   56921 horizontal.go:361] Horizontal Pod Autoscaler frontend has been deleted in namespace-1634770716-31934
deployment.apps/nginx-deployment image updated
apps.sh:379: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:380: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment image updated
apps.sh:383: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
... skipping 48 lines ...
deployment.apps/nginx-deployment env updated
deployment.apps/nginx-deployment env updated
I1020 22:59:04.020559   56921 event.go:294] "Event occurred" object="namespace-1634770728-14950/nginx-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-b8c4df945 to 0"
deployment.apps/nginx-deployment env updated
I1020 22:59:04.156025   56921 event.go:294] "Event occurred" object="namespace-1634770728-14950/nginx-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-5f8c874568 to 1"
Successful
message:error: standard input cannot be used for multiple arguments
has:standard input cannot be used for multiple arguments
deployment.apps "nginx-deployment" deleted
I1020 22:59:04.263245   56921 event.go:294] "Event occurred" object="namespace-1634770728-14950/nginx-deployment-5f8c874568" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-5f8c874568-ddhms"
I1020 22:59:04.329214   56921 event.go:294] "Event occurred" object="namespace-1634770728-14950/nginx-deployment-b8c4df945" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-b8c4df945-vzhtf"
configmap "test-set-env-config" deleted
E1020 22:59:04.374413   56921 replica_set.go:536] sync "namespace-1634770728-14950/nginx-deployment-68d657fb6" failed with replicasets.apps "nginx-deployment-68d657fb6" not found
secret "test-set-env-secret" deleted
+++ exit code: 0
Recording: run_rs_tests
Running command: run_rs_tests

+++ Running case: test-cmd.run_rs_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_rs_tests
+++ [1020 22:59:04] Creating namespace namespace-1634770744-2522
namespace/namespace-1634770744-2522 created
E1020 22:59:04.610343   56921 replica_set.go:536] sync "namespace-1634770728-14950/nginx-deployment-5f8c874568" failed with replicasets.apps "nginx-deployment-5f8c874568" not found
Context "test" modified.
+++ [1020 22:59:04] Testing kubectl(v1:replicasets)
E1020 22:59:04.660228   56921 replica_set.go:536] sync "namespace-1634770728-14950/nginx-deployment-b8c4df945" failed with replicasets.apps "nginx-deployment-b8c4df945" not found
apps.sh:553: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
replicaset.apps/frontend created
+++ [1020 22:59:04] Deleting rs
I1020 22:59:04.877891   56921 event.go:294] "Event occurred" object="namespace-1634770744-2522/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-gftl6"
I1020 22:59:04.883579   56921 event.go:294] "Event occurred" object="namespace-1634770744-2522/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-gbf7t"
I1020 22:59:04.883644   56921 event.go:294] "Event occurred" object="namespace-1634770744-2522/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-jlgv2"
replicaset.apps "frontend" deleted
apps.sh:559: Successful get pods -l "tier=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
E1020 22:59:05.060730   56921 replica_set.go:536] sync "namespace-1634770744-2522/frontend" failed with replicasets.apps "frontend" not found
apps.sh:563: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
replicaset.apps/frontend created
I1020 22:59:05.285181   56921 event.go:294] "Event occurred" object="namespace-1634770744-2522/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-hbl68"
I1020 22:59:05.291995   56921 event.go:294] "Event occurred" object="namespace-1634770744-2522/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-4l2s5"
I1020 22:59:05.292205   56921 event.go:294] "Event occurred" object="namespace-1634770744-2522/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-pvwdz"
apps.sh:567: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
+++ [1020 22:59:05] Deleting rs
replicaset.apps "frontend" deleted
E1020 22:59:05.510104   56921 replica_set.go:536] sync "namespace-1634770744-2522/frontend" failed with replicasets.apps "frontend" not found
apps.sh:571: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:573: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
pod "frontend-4l2s5" deleted
pod "frontend-hbl68" deleted
pod "frontend-pvwdz" deleted
apps.sh:576: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 15 lines ...
Namespace:    namespace-1634770744-2522
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1634770744-2522
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
Namespace:    namespace-1634770744-2522
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
Namespace:    namespace-1634770744-2522
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 25 lines ...
Namespace:    namespace-1634770744-2522
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1634770744-2522
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1634770744-2522
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
Namespace:    namespace-1634770744-2522
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 225 lines ...
horizontalpodautoscaler.autoscaling/frontend autoscaled
apps.sh:716: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
Successful
message:kubectl-autoscale
has:kubectl-autoscale
horizontalpodautoscaler.autoscaling "frontend" deleted
E1020 22:59:13.754977   83692 run.go:120] "command failed" err="required flag(s) \"max\" not set"
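A minimal sketch of the two autoscale calls, using the replicaset name from the log: --max is required, --min is optional.
  kubectl autoscale rs frontend --min=2 --max=3 --cpu-percent=80   # creates the HPA asserted above
  kubectl autoscale rs frontend --min=2 --cpu-percent=80           # fails: required flag(s) "max" not set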
replicaset.apps "frontend" deleted
+++ exit code: 0
Recording: run_stateful_set_tests
Running command: run_stateful_set_tests

+++ Running case: test-cmd.run_stateful_set_tests 
... skipping 74 lines ...
apps.sh:475: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:476: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
statefulset.apps/nginx rolled back
apps.sh:479: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
apps.sh:480: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
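A minimal sketch (statefulset name from the log): --to-revision is validated against recorded history, which can be listed first.
  kubectl rollout history statefulset/nginx                      # shows the revisions that actually exist
  kubectl rollout undo statefulset/nginx --to-revision=1000000   # fails: revision not found in history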
apps.sh:484: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
apps.sh:485: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
statefulset.apps/nginx rolled back
apps.sh:488: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
apps.sh:489: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
... skipping 33 lines ...
generic-resources.sh:64: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
(Bservice/mock created
replicationcontroller/mock created
I1020 22:59:19.191918   56921 event.go:294] "Event occurred" object="namespace-1634770758-10438/mock" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mock-htx57"
generic-resources.sh:72: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: mock:
generic-resources.sh:80: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: mock:
E1020 22:59:19.399909   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
NAME           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/mock   ClusterIP   10.0.0.188   <none>        99/TCP    0s

NAME                         DESIRED   CURRENT   READY   AGE
replicationcontroller/mock   1         1         0       0s
Name:              mock
... skipping 16 lines ...
Name:         mock
Namespace:    namespace-1634770758-10438
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:3.6
    Port:         9949/TCP
... skipping 11 lines ...
replicationcontroller/mock replaced
I1020 22:59:19.908590   56921 event.go:294] "Event occurred" object="namespace-1634770758-10438/mock" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mock-ht9cc"
generic-resources.sh:96: Successful get services mock {{.metadata.labels.status}}: replaced
generic-resources.sh:102: Successful get rc mock {{.metadata.labels.status}}: replaced
service/mock edited
replicationcontroller/mock edited
E1020 22:59:20.315550   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:114: Successful get services mock {{.metadata.labels.status}}: edited
generic-resources.sh:120: Successful get rc mock {{.metadata.labels.status}}: edited
service/mock labeled
replicationcontroller/mock labeled
generic-resources.sh:134: Successful get services mock {{.metadata.labels.labeled}}: true
generic-resources.sh:140: Successful get rc mock {{.metadata.labels.labeled}}: true
... skipping 36 lines ...
Name:         mock
Namespace:    namespace-1634770758-10438
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:3.6
    Port:         9949/TCP
... skipping 59 lines ...
Name:         mock
Namespace:    namespace-1634770758-10438
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:3.6
    Port:         9949/TCP
... skipping 15 lines ...
service/mock edited
replicationcontroller/mock edited
generic-resources.sh:114: Successful get services mock {{.metadata.labels.status}}: edited
generic-resources.sh:120: Successful get rc mock {{.metadata.labels.status}}: edited
service/mock labeled
replicationcontroller/mock labeled
E1020 22:59:25.107124   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:134: Successful get services mock {{.metadata.labels.labeled}}: true
generic-resources.sh:140: Successful get rc mock {{.metadata.labels.labeled}}: true
service/mock annotated
replicationcontroller/mock annotated
generic-resources.sh:153: Successful get services mock {{.metadata.annotations.annotated}}: true
generic-resources.sh:159: Successful get rc mock {{.metadata.annotations.annotated}}: true
... skipping 14 lines ...
Namespace:    namespace-1634770758-10438
Selector:     app=mock
Labels:       app=mock
              status=replaced
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:3.6
    Port:         9949/TCP
... skipping 11 lines ...
Namespace:    namespace-1634770758-10438
Selector:     app=mock2
Labels:       app=mock2
              status=replaced
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock2
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:3.6
    Port:         9949/TCP
... skipping 110 lines ...
+++ [1020 22:59:31] Creating namespace namespace-1634770771-947
namespace/namespace-1634770771-947 created
Context "test" modified.
+++ [1020 22:59:31] Testing persistent volumes
storage.sh:30: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
persistentvolume/pv0001 created
E1020 22:59:31.516607   56921 pv_protection_controller.go:118] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
E1020 22:59:31.528998   56921 pv_protection_controller.go:118] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
(Bpersistentvolume "pv0001" deleted
persistentvolume/pv0002 created
storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
(Bpersistentvolume "pv0002" deleted
persistentvolume/pv0003 created
E1020 22:59:32.203462   56921 pv_protection_controller.go:118] PV pv0003 failed with : Operation cannot be fulfilled on persistentvolumes "pv0003": the object has been modified; please apply your changes to the latest version and try again
storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
query for persistentvolumes had limit param
query for events had limit param
query for persistentvolumes had user-specified limit param
Successful describe persistentvolumes verbose logs:
I1020 22:59:32.334166   86465 loader.go:372] Config loaded from file:  /tmp/tmp.esUsaGxHM9/.kube/config
... skipping 101 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Wed, 20 Oct 2021 22:54:11 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Wed, 20 Oct 2021 22:54:11 +0000   Wed, 20 Oct 2021 22:55:11 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Wed, 20 Oct 2021 22:54:11 +0000   Wed, 20 Oct 2021 22:55:11 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Wed, 20 Oct 2021 22:54:11 +0000   Wed, 20 Oct 2021 22:55:11 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 31 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Wed, 20 Oct 2021 22:54:11 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Wed, 20 Oct 2021 22:54:11 +0000   Wed, 20 Oct 2021 22:55:11 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Wed, 20 Oct 2021 22:54:11 +0000   Wed, 20 Oct 2021 22:55:11 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Wed, 20 Oct 2021 22:54:11 +0000   Wed, 20 Oct 2021 22:55:11 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 32 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Wed, 20 Oct 2021 22:54:11 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Wed, 20 Oct 2021 22:54:11 +0000   Wed, 20 Oct 2021 22:55:11 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Wed, 20 Oct 2021 22:54:11 +0000   Wed, 20 Oct 2021 22:55:11 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Wed, 20 Oct 2021 22:54:11 +0000   Wed, 20 Oct 2021 22:55:11 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 31 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Wed, 20 Oct 2021 22:54:11 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Wed, 20 Oct 2021 22:54:11 +0000   Wed, 20 Oct 2021 22:55:11 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Wed, 20 Oct 2021 22:54:11 +0000   Wed, 20 Oct 2021 22:55:11 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Wed, 20 Oct 2021 22:54:11 +0000   Wed, 20 Oct 2021 22:55:11 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 39 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Wed, 20 Oct 2021 22:54:11 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Wed, 20 Oct 2021 22:54:11 +0000   Wed, 20 Oct 2021 22:55:11 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Wed, 20 Oct 2021 22:54:11 +0000   Wed, 20 Oct 2021 22:55:11 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Wed, 20 Oct 2021 22:54:11 +0000   Wed, 20 Oct 2021 22:55:11 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 31 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Wed, 20 Oct 2021 22:54:11 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Wed, 20 Oct 2021 22:54:11 +0000   Wed, 20 Oct 2021 22:55:11 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Wed, 20 Oct 2021 22:54:11 +0000   Wed, 20 Oct 2021 22:55:11 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Wed, 20 Oct 2021 22:54:11 +0000   Wed, 20 Oct 2021 22:55:11 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 31 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Wed, 20 Oct 2021 22:54:11 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Wed, 20 Oct 2021 22:54:11 +0000   Wed, 20 Oct 2021 22:55:11 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Wed, 20 Oct 2021 22:54:11 +0000   Wed, 20 Oct 2021 22:55:11 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Wed, 20 Oct 2021 22:54:11 +0000   Wed, 20 Oct 2021 22:55:11 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 30 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Wed, 20 Oct 2021 22:54:11 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Wed, 20 Oct 2021 22:54:11 +0000   Wed, 20 Oct 2021 22:55:11 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Wed, 20 Oct 2021 22:54:11 +0000   Wed, 20 Oct 2021 22:55:11 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Wed, 20 Oct 2021 22:54:11 +0000   Wed, 20 Oct 2021 22:55:11 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 141 lines ...
Successful
message:yes
has:yes
Successful
message:yes
has:yes
E1020 22:59:40.493611   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:Warning: the server doesn't have a resource type 'invalid_resource'
yes
has:the server doesn't have a resource type
Successful
message:yes
has:yes
Successful
message:error: --subresource can not be used with NonResourceURL
has:subresource can not be used with NonResourceURL
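A minimal sketch, assuming /logs as the non-resource URL and log as the subresource: `kubectl auth can-i` accepts either a resource or a non-resource URL, and --subresource only applies to the former.
  kubectl auth can-i get pods --subresource=log    # resource + subresource: accepted
  kubectl auth can-i get /logs                     # non-resource URL: accepted
  kubectl auth can-i get /logs --subresource=log   # rejected: --subresource can not be used with NonResourceURL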
Successful
Successful
message:yes
0
has:0
... skipping 59 lines ...
		{Verbs:[get list watch] APIGroups:[] Resources:[configmaps] ResourceNames:[] NonResourceURLs:[]}
legacy-script.sh:848: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
legacy-script.sh:849: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
legacy-script.sh:850: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
legacy-script.sh:851: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
Successful
message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
has:only rbac.authorization.k8s.io/v1 is supported
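A minimal sketch, assuming a hypothetical manifest rbac.yaml that still declares v1beta1 RBAC kinds: `kubectl auth reconcile` only accepts rbac.authorization.k8s.io/v1 objects, so such a file must be migrated to v1 before reconciling.
  kubectl auth reconcile -f rbac.yaml   # fails while apiVersion is rbac.authorization.k8s.io/v1beta1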
rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
role.rbac.authorization.k8s.io "testing-R" deleted
warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
... skipping 434 lines ...
message:Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
No resources found
has:PodSecurityPolicy is deprecated
Successful
message:Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
No resources found
E1020 23:00:00.208196   89409 run.go:120] "command failed" err="1 warning received"
has:PodSecurityPolicy is deprecated
Successful
message:Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
No resources found
E1020 23:00:00.208196   89409 run.go:120] "command failed" err="1 warning received"
has:err="1 warning received"
Recording: run_template_output_tests
Running command: run_template_output_tests

+++ Running case: test-cmd.run_template_output_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 93 lines ...
replicationcontroller/cassandra created
I1020 23:00:01.964968   56921 event.go:294] "Event occurred" object="namespace-1634770800-25280/cassandra" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cassandra-fqmrn"
I1020 23:00:02.006704   56921 event.go:294] "Event occurred" object="namespace-1634770800-25280/cassandra" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cassandra-g9tsd"
Successful
message:cassandra:
has:cassandra:
E1020 23:00:02.101755   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	reconciliation required create
	missing rules added:
		{Verbs:[create delete deletecollection get list patch update watch] APIGroups:[] Resources:[pods] ResourceNames:[] NonResourceURLs:[]}
	reconciliation required create
	missing subjects added:
		{Kind:Group APIGroup:rbac.authorization.k8s.io Name:system:masters Namespace:}
... skipping 235 lines ...
I1020 23:00:06.116323   90287 round_trippers.go:541] GET https://127.0.0.1:6443/apis/certificates.k8s.io/v1/certificatesigningrequests?limit=500 200 OK in 1 milliseconds
I1020 23:00:06.118810   90287 round_trippers.go:541] GET https://127.0.0.1:6443/apis/certificates.k8s.io/v1/certificatesigningrequests/foo 200 OK in 1 milliseconds
I1020 23:00:06.127477   90287 round_trippers.go:541] GET https://127.0.0.1:6443/api/v1/events?fieldSelector=involvedObject.uid%3D18b078bb-5a34-4353-8b99-6f490c294841%2CinvolvedObject.name%3Dfoo%2CinvolvedObject.namespace%3D%2CinvolvedObject.kind%3DCertificateSigningRequest&limit=500 200 OK in 7 milliseconds
certificatesigningrequest.certificates.k8s.io "foo" deleted
certificate.sh:36: Successful get csr {{range.items}}{{.metadata.name}}{{end}}: 
(Bcertificatesigningrequest.certificates.k8s.io/foo created
E1020 23:00:06.671676   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
certificate.sh:39: Successful get csr/foo {{range.status.conditions}}{{.type}}{{end}}: 
certificatesigningrequest.certificates.k8s.io/foo approved
{
    "apiVersion": "v1",
    "items": [
        {
... skipping 203 lines ...
node/127.0.0.1 cordoned (server dry run)
WARNING: deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: namespace-1634770808-17302/test-pod-1
evicting pod namespace-1634770808-17302/test-pod-1 (server dry run)
node/127.0.0.1 drained (server dry run)
node-management.sh:140: Successful get pods {{range .items}}{{.metadata.name}},{{end}}: test-pod-1,test-pod-2,
WARNING: deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: namespace-1634770808-17302/test-pod-1
E1020 23:00:12.475885   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1020 23:00:19.869747   56921 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E1020 23:00:27.141884   56921 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
{"level":"warn","ts":"2021-10-20T23:00:27.141Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000f4afc0/#initially=[http://127.0.0.1:2379]","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
E1020 23:00:27.142033   53374 status.go:71] apiserver received an error that is not an metav1.Status: context.deadlineExceededError{}: context deadline exceeded
E1020 23:00:27.142124   53374 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
E1020 23:00:27.144007   53374 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E1020 23:00:27.145195   53374 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
I1020 23:00:27.146403   53374 trace.go:205] Trace[10720704]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:kube-controller-manager/v1.23.0 (linux/amd64) kubernetes/c269494/leader-election,audit-id:64c38848-55fc-4c8b-a4e1-f82c5d857a55,client:127.0.0.1,accept:application/json, */*,protocol:HTTP/2.0 (20-Oct-2021 23:00:22.142) (total time: 5004ms):
Trace[10720704]: [5.004202466s] [5.004202466s] END
E1020 23:00:27.147859   53374 timeout.go:137] post-timeout activity - time-elapsed: 6.048451ms, GET "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager" result: <nil>
{"level":"warn","ts":"2021-10-20T23:00:27.492Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000eeba40/#initially=[http://127.0.0.1:2379]","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
E1020 23:00:32.141843   56921 leaderelection.go:330] error retrieving resource lock kube-system/kube-controller-manager: Get "https://127.0.0.1:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager?timeout=5s": context deadline exceeded
I1020 23:00:32.141927   56921 leaderelection.go:283] failed to renew lease kube-system/kube-controller-manager: timed out waiting for the condition
F1020 23:00:32.141972   56921 controllermanager.go:290] leaderelection lost
{"level":"warn","ts":"2021-10-20T23:00:32.141Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000f4afc0/#initially=[http://127.0.0.1:2379]","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
E1020 23:00:32.142039   53374 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
E1020 23:00:32.142128   53374 writers.go:117] apiserver was unable to write a JSON response: http: Handler timeout
E1020 23:00:32.143287   53374 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
E1020 23:00:32.144473   53374 writers.go:130] apiserver was unable to write a fallback JSON response: http: Handler timeout
I1020 23:00:32.145601   53374 trace.go:205] Trace[2022993662]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:kube-controller-manager/v1.23.0 (linux/amd64) kubernetes/c269494/leader-election,audit-id:74759a9e-bc38-4be4-baaa-d750bb17e86b,client:127.0.0.1,accept:application/json, */*,protocol:HTTP/2.0 (20-Oct-2021 23:00:29.144) (total time: 3001ms):
Trace[2022993662]: [3.001449018s] [3.001449018s] END
E1020 23:00:32.145948   53374 timeout.go:137] post-timeout activity - time-elapsed: 3.990184ms, GET "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager" result: <nil>
goroutine 181 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0x1)
... skipping 12130 lines ...
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:105 +0x11c
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch.NewStreamWatcher
	/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch/streamwatcher.go:76 +0x135
I1020 23:00:32.142052   56921 pv_controller_base.go:324] Shutting down persistent volume controller
I1020 23:00:32.156580   56921 namespace_controller.go:212] Shutting down namespace controller
I1020 23:00:32.230433   56921 pv_controller_base.go:534] claim worker queue shutting down
{"level":"warn","ts":"2021-10-20T23:00:32.239Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0006a6e00/#initially=[http://127.0.0.1:2379]","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
E1020 23:00:32.239511   53374 wrap.go:54] timeout or abort while handling: method=POST URI="/api/v1/namespaces/default/events" audit-ID="fb1356b2-017c-4096-9d7c-ee480d073672"
E1020 23:00:32.239563   53374 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 23.278µs, panicked: false, err: context canceled, panic-reason: <nil>
E1020 23:00:32.239585   53374 timeout.go:137] post-timeout activity - time-elapsed: 3.362µs, POST "/api/v1/namespaces/default/events" result: <nil>
{"level":"warn","ts":"2021-10-20T23:00:34.492Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000b01c00/#initially=[http://127.0.0.1:2379]","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
{"level":"warn","ts":"2021-10-20T23:00:34.492Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0012f6540/#initially=[http://127.0.0.1:2379]","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
{"level":"warn","ts":"2021-10-20T23:00:34.492Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000f2ae00/#initially=[http://127.0.0.1:2379]","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
{"level":"warn","ts":"2021-10-20T23:00:34.493Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0013d0fc0/#initially=[http://127.0.0.1:2379]","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
{"level":"warn","ts":"2021-10-20T23:00:34.493Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000ebb180/#initially=[http://127.0.0.1:2379]","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
{"level":"warn","ts":"2021-10-20T23:00:34.493Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0008c6e00/#initially=[http://127.0.0.1:2379]","attempt":0,"error":"rpc error: code = Unavailable desc = etcdserver: request timed out"}
I1020 23:00:34.963946   53374 trace.go:205] Trace[810343887]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.23.0 (linux/amd64) kubernetes/c269494,audit-id:9c552f6e-bce3-4ed0-892e-945f5db5db5d,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (20-Oct-2021 23:00:27.487) (total time: 7476ms):
Trace[810343887]: ---"About to write a response" 7476ms (23:00:34.963)
Trace[810343887]: [7.476870417s] [7.476870417s] END
I1020 23:00:34.964125   53374 trace.go:205] Trace[1488045158]: "Get" url:/api/v1/namespaces/namespace-1634770808-17302/pods/test-pod-1,user-agent:kubectl/v1.23.0 (linux/amd64) kubernetes/c269494,audit-id:31d9cf22-0168-438e-95fb-06b5aa1f6ad5,client:127.0.0.1,accept:application/json, */*,protocol:HTTP/2.0 (20-Oct-2021 23:00:20.864) (total time: 14099ms):
Trace[1488045158]: ---"About to write a response" 14099ms (23:00:34.963)
Trace[1488045158]: [14.099749571s] [14.099749571s] END
... skipping 20 lines ...
message:node/127.0.0.1 already uncordoned (server dry run)
has:already uncordoned
node-management.sh:161: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
node/127.0.0.1 labeled
node-management.sh:166: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
Successful
message:error: cannot specify both a node name and a --selector option
See 'kubectl drain -h' for help and examples
has:cannot specify both a node name
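A minimal sketch (node name from the log; the label is arbitrary): drain targets either named nodes or a --selector, never both.
  kubectl drain 127.0.0.1 --selector=kubernetes.io/os=linux   # fails: cannot specify both a node name and a --selector option
  kubectl drain 127.0.0.1 --ignore-daemonsets                 # drains the named node
  kubectl drain --selector=kubernetes.io/os=linux             # drains every matching node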
node-management.sh:172: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
node-management.sh:174: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
node-management.sh:176: Successful get pods {{range .items}}{{.metadata.name}},{{end}}: test-pod-1,test-pod-2,
Successful
... skipping 78 lines ...
WARNING: deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: namespace-1634770808-17302/test-pod-1, namespace-1634770808-17302/test-pod-2
evicting pod namespace-1634770808-17302/test-pod-1 (dry run)
evicting pod namespace-1634770808-17302/test-pod-2 (dry run)
node/127.0.0.1 drained (dry run)
has:/v1/pods?fieldSelector=spec.nodeName%3D127.0.0.1&limit=500 200 OK
Successful
message:error: USAGE: cordon NODE [flags]
See 'kubectl cordon -h' for help and examples
has:error\: USAGE\: cordon NODE
node/127.0.0.1 already uncordoned
Successful
message:error: You must provide one or more resources by argument or filename.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
   '<resource> <name>'
   '<resource>'
has:must provide one or more resources
... skipping 18 lines ...
+++ [1020 23:00:46] Testing kubectl plugins
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/version/kubectl-version
  - warning: kubectl-version overwrites existing command: "kubectl version"
error: one plugin warning was found
has:kubectl-version overwrites existing command: "kubectl version"
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/kubectl-foo
test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
  - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
error: one plugin warning was found
has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/kubectl-foo
has:plugins are available
Successful
message:Unable to read directory "test/fixtures/pkg/kubectl/plugins/empty" from your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory. Skipping...
error: unable to find any kubectl plugins in your PATH
has:unable to find any kubectl plugins in your PATH
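A minimal sketch of the discovery rule these checks exercise, assuming ~/bin is on PATH: any executable named kubectl-<name> on PATH becomes `kubectl <name>`.
cat > ~/bin/kubectl-foo <<'EOF'
#!/bin/sh
echo "I am plugin foo"
EOF
chmod +x ~/bin/kubectl-foo
kubectl foo            # prints: I am plugin foo
kubectl plugin list    # enumerates plugins, warning about shadowed or overwritten names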
Successful
message:I am plugin foo
has:plugin foo
Successful
message:I am plugin bar called with args test/fixtures/pkg/kubectl/plugins/bar/kubectl-bar arg1
... skipping 10 lines ...

+++ Running case: test-cmd.run_impersonation_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_impersonation_tests
+++ [1020 23:00:46] Testing impersonation
Successful
message:error: requesting groups or user-extra for test-admin without impersonating a user
has:without impersonating a user
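A minimal sketch, assuming a hypothetical csr.yaml (the user name matches the assertions below): --as-group is only valid together with --as.
  kubectl create -f csr.yaml --as-group=system:masters   # fails: requesting groups without impersonating a user
  kubectl create -f csr.yaml --as=user1                  # succeeds; spec.username records the impersonated user1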
certificatesigningrequest.certificates.k8s.io/foo created
authorization.sh:57: Successful get csr/foo {{.spec.username}}: user1
authorization.sh:58: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
certificatesigningrequest.certificates.k8s.io "foo" deleted
certificatesigningrequest.certificates.k8s.io/foo created
... skipping 108 lines ...
I1020 23:00:53.700471   53374 available_controller.go:503] Shutting down AvailableConditionController
I1020 23:00:53.700483   53374 naming_controller.go:302] Shutting down NamingConditionController
I1020 23:00:53.700689   53374 controller.go:122] Shutting down OpenAPI controller
E1020 23:00:53.703384   53374 controller.go:189] Unable to remove endpoints from kubernetes service: Get "https://127.0.0.1:6443/api/v1/namespaces/default/endpoints/kubernetes": dial tcp 127.0.0.1:6443: connect: connection refused
!!! [1020 23:00:53] Call tree:
!!! [1020 23:00:53]  1: hack/make-rules/test-cmd.sh:1 cleanup(...)
make: *** [Makefile:319: test-cmd] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: dockerProgram process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...