PR stlaz: Set all PSa labels in tests
Result ABORTED
Tests 0 failed / 143 succeeded
Started 2023-05-26 11:48
Elapsed 38m49s
Revision 385f3e2a85aae5f07e75e4a2083a6e06a965b5fc
Refs 118280

No Test Failures!



Error lines from build-log.txt

... skipping 49 lines ...
Recording: record_command_canary
Running command: record_command_canary

+++ Running case: test-cmd.record_command_canary 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: record_command_canary
/home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 167: bogus-expected-to-fail: command not found
!!! [0526 11:49:08] Call tree:
!!! [0526 11:49:08]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 record_command_canary(...)
!!! [0526 11:49:08]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0526 11:49:08]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:143 juLog(...)
!!! [0526 11:49:08]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:171 record_command(...)
!!! [0526 11:49:08]  5: hack/make-rules/test-cmd.sh:35 source(...)
+++ exit code: 1
+++ error: 1
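The exit code of 1 above is expected: the canary deliberately invokes a command that cannot exist, so the harness's failure-recording path (record_command -> juLog -> eVal in the call tree above) is exercised before the real tests run. A minimal sketch of what such a canary amounts to, not the actual legacy-script.sh contents (only fragments of that script are visible here):

  # deliberately fail so record_command's error handling is exercised end to end
  record_command_canary() {
    bogus-expected-to-fail
  }
  record_command record_command_canary   # expected to record exit code 1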
+++ [0526 11:49:09] Running kubeadm tests
go: downloading go.uber.org/automaxprocs v1.5.2
+++ [0526 11:49:13] Setting GOMAXPROCS: 6
+++ [0526 11:49:14] Building go targets for linux/amd64
    k8s.io/kubernetes/cmd/kubeadm (static)
+++ [0526 11:50:08] Setting GOMAXPROCS: 6
... skipping 225 lines ...
I0526 11:52:34.558631   19392 controller.go:85] Starting OpenAPI V3 controller
I0526 11:52:34.558652   19392 naming_controller.go:291] Starting NamingConditionController
I0526 11:52:34.558670   19392 establishing_controller.go:76] Starting EstablishingController
I0526 11:52:34.558687   19392 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0526 11:52:34.558701   19392 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0526 11:52:34.562839   19392 crd_finalizer.go:266] Starting CRDFinalizer
E0526 11:52:34.651541   19392 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
I0526 11:52:34.657747   19392 cache.go:39] Caches are synced for AvailableConditionController controller
I0526 11:52:34.657774   19392 shared_informer.go:318] Caches are synced for configmaps
I0526 11:52:34.657794   19392 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0526 11:52:34.657834   19392 shared_informer.go:318] Caches are synced for crd-autoregister
I0526 11:52:34.657853   19392 aggregator.go:152] initial CRD sync complete...
I0526 11:52:34.657861   19392 autoregister_controller.go:141] Starting autoregister controller
... skipping 19 lines ...
+++ [0526 11:52:36] Setting GOMAXPROCS: 6
+++ [0526 11:52:36] Building go targets for linux/amd64
    k8s.io/kubernetes/cmd/kube-controller-manager (static)
+++ [0526 11:53:15] Generate kubeconfig for controller-manager
+++ [0526 11:53:15] Starting controller-manager
I0526 11:53:15.593171   22231 serving.go:348] Generated self-signed cert in-memory
W0526 11:53:15.995802   22231 authentication.go:446] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0526 11:53:15.995831   22231 authentication.go:339] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0526 11:53:15.995840   22231 authentication.go:363] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0526 11:53:15.995866   22231 authorization.go:225] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0526 11:53:15.995877   22231 authorization.go:193] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0526 11:53:15.996228   22231 controllermanager.go:187] "Starting" version="v1.28.0-alpha.0.1222+758aae693d5fbb"
I0526 11:53:15.996250   22231 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0526 11:53:15.998231   22231 secure_serving.go:210] Serving securely on [::]:10257
I0526 11:53:15.998355   22231 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0526 11:53:15.998568   22231 leaderelection.go:250] attempting to acquire leader lease kube-system/kube-controller-manager...
... skipping 9 lines ...
I0526 11:53:16.019665   22231 controllermanager.go:638] "Started controller" controller="ttl-after-finished"
W0526 11:53:16.019892   22231 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0526 11:53:16.019945   22231 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0526 11:53:16.019963   22231 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0526 11:53:16.019998   22231 controllermanager.go:638] "Started controller" controller="deployment"
I0526 11:53:16.020015   22231 controllermanager.go:616] "Warning: skipping controller" controller="nodeipam"
E0526 11:53:16.020209   22231 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
I0526 11:53:16.020231   22231 controllermanager.go:616] "Warning: skipping controller" controller="cloud-node-lifecycle"
W0526 11:53:16.020443   22231 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0526 11:53:16.020512   22231 controllermanager.go:638] "Started controller" controller="cronjob"
I0526 11:53:16.020536   22231 controllermanager.go:603] "Warning: controller is disabled" controller="tokencleaner"
W0526 11:53:16.020780   22231 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0526 11:53:16.020921   22231 controllermanager.go:638] "Started controller" controller="clusterrole-aggregation"
... skipping 146 lines ...
I0526 11:53:16.044464   22231 resource_quota_controller.go:295] "Starting resource quota controller"
I0526 11:53:16.044741   22231 shared_informer.go:311] Waiting for caches to sync for resource quota
I0526 11:53:16.044750   22231 controllermanager.go:638] "Started controller" controller="csrapproving"
I0526 11:53:16.044767   22231 resource_quota_monitor.go:291] "QuotaMonitor running"
I0526 11:53:16.044932   22231 certificate_controller.go:112] Starting certificate controller "csrapproving"
I0526 11:53:16.044951   22231 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
E0526 11:53:16.045186   22231 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
I0526 11:53:16.045203   22231 controllermanager.go:616] "Warning: skipping controller" controller="service"
I0526 11:53:16.049373   22231 shared_informer.go:311] Waiting for caches to sync for resource quota
W0526 11:53:16.060835   22231 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0526 11:53:16.064376   22231 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0526 11:53:16.064653   22231 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0526 11:53:16.064717   22231 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
... skipping 39 lines ...
I0526 11:53:16.436225   22231 shared_informer.go:318] Caches are synced for persistent volume
I0526 11:53:16.436408   22231 shared_informer.go:318] Caches are synced for stateful set
I0526 11:53:16.440707   22231 shared_informer.go:318] Caches are synced for daemon sets
I0526 11:53:16.445081   22231 shared_informer.go:318] Caches are synced for resource quota
I0526 11:53:16.450268   22231 shared_informer.go:318] Caches are synced for resource quota
node/127.0.0.1 created
I0526 11:53:16.730035   22231 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"127.0.0.1\" does not exist"
+++ [0526 11:53:16] Checking kubectl version
I0526 11:53:16.767443   22231 shared_informer.go:318] Caches are synced for garbage collector
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"28+", GitVersion:"v1.28.0-alpha.0.1222+758aae693d5fbb", GitCommit:"758aae693d5fbbad13f596d2ae0673b7e9409fef", GitTreeState:"clean", BuildDate:"2023-05-26T09:50:53Z", GoVersion:"go1.20.4", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v5.0.1
Server Version: version.Info{Major:"1", Minor:"28+", GitVersion:"v1.28.0-alpha.0.1222+758aae693d5fbb", GitCommit:"758aae693d5fbbad13f596d2ae0673b7e9409fef", GitTreeState:"clean", BuildDate:"2023-05-26T09:50:53Z", GoVersion:"go1.20.4", Compiler:"gc", Platform:"linux/amd64"}
I0526 11:53:16.834923   22231 shared_informer.go:318] Caches are synced for garbage collector
I0526 11:53:16.834953   22231 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
The Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.0.0.1"}: failed to allocate IP 10.0.0.1: provided IP is already allocated
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   41s
Recording: run_kubectl_version_tests
Running command: run_kubectl_version_tests

+++ Running case: test-cmd.run_kubectl_version_tests 
... skipping 196 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [0526 11:53:22] Creating namespace namespace-1685102002-9446
namespace/namespace-1685102002-9446 created
Context "test" modified.
+++ [0526 11:53:22] Testing RESTMapper
+++ [0526 11:53:22] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
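The table that follows is standard kubectl api-resources output; the negative check above can be reproduced by hand with any nonexistent type, assuming a reachable cluster:

  kubectl api-resources             # prints the NAME/SHORTNAMES/APIVERSION/NAMESPACED/KIND table below
  kubectl get unknownresourcetype   # expected failure: no such resource type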
NAME                              SHORTNAMES   APIVERSION                             NAMESPACED   KIND
bindings                                       v1                                     true         Binding
componentstatuses                 cs           v1                                     false        ComponentStatus
configmaps                        cm           v1                                     true         ConfigMap
endpoints                         ep           v1                                     true         Endpoints
... skipping 61 lines ...
namespace/namespace-1685102004-29032 created
Context "test" modified.
+++ [0526 11:53:24] Testing clusterroles
rbac.sh:29: Successful get clusterroles/cluster-admin {{.metadata.name}}: cluster-admin
rbac.sh:30: Successful get clusterrolebindings/cluster-admin {{.metadata.name}}: cluster-admin
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created (dry run)
clusterrole.rbac.authorization.k8s.io/pod-admin created (server dry run)
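The "(dry run)" and "(server dry run)" lines above correspond to kubectl's two dry-run modes; a hedged sketch of the likely invocations (the --verb/--resource arguments are assumptions, since rbac.sh itself is not shown here):

  kubectl create clusterrole pod-admin --verb='*' --resource=pods --dry-run=client
  kubectl create clusterrole pod-admin --verb='*' --resource=pods --dry-run=server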
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created
rbac.sh:42: Successful get clusterrole/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
Successful
message:Warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "pod-admin" deleted
... skipping 18 lines ...
clusterrole.rbac.authorization.k8s.io/url-reader created
rbac.sh:61: Successful get clusterrole/url-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:
rbac.sh:62: Successful get clusterrole/url-reader {{range.rules}}{{range.nonResourceURLs}}{{.}}:{{end}}{{end}}: /logs/*:/healthz/*:
clusterrole.rbac.authorization.k8s.io/aggregation-reader created
rbac.sh:64: Successful get clusterrole/aggregation-reader {{.metadata.name}}: aggregation-reader
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created
rbac.sh:77: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (server dry run)
rbac.sh:80: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
... skipping 64 lines ...
rbac.sh:102: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:foo:test-all-user:
rbac.sh:103: Successful get clusterrolebinding/super-group {{range.subjects}}{{.name}}:{{end}}: the-group:foo:test-all-user:
rbac.sh:104: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.name}}:{{end}}: sa-name:foo:test-all-user:
rolebinding.rbac.authorization.k8s.io/admin created (dry run)
rolebinding.rbac.authorization.k8s.io/admin created (server dry run)
Successful
message:Error from server (NotFound): rolebindings.rbac.authorization.k8s.io "admin" not found
has: not found
rolebinding.rbac.authorization.k8s.io/admin created
rbac.sh:113: Successful get rolebinding/admin {{.roleRef.kind}}: ClusterRole
rbac.sh:114: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:
rolebinding.rbac.authorization.k8s.io/admin subjects updated
rbac.sh:116: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:foo:
... skipping 152 lines ...
namespace/namespace-1685102011-19670 created
Context "test" modified.
+++ [0526 11:53:31] Testing role
role.rbac.authorization.k8s.io/pod-admin created (dry run)
role.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): roles.rbac.authorization.k8s.io "pod-admin" not found
has: not found
role.rbac.authorization.k8s.io/pod-admin created
rbac.sh:159: Successful get role/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
rbac.sh:160: Successful get role/pod-admin {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
rbac.sh:161: Successful get role/pod-admin {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
Successful
... skipping 625 lines ...
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
has:valid-pod
core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: resource(s) were provided, but no name was specified
core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: setting 'all' parameter but found a non empty selector. 
core.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:210: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:214: Successful get pods -lname=valid-pod {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:219: Successful get namespaces {{range.items}}{{ if eq .metadata.name "test-kubectl-describe-pod" }}found{{end}}{{end}}:: :
... skipping 30 lines ...
I0526 11:53:50.574805   27374 round_trippers.go:553] GET https://127.0.0.1:6443/apis/policy/v1/namespaces/test-kubectl-describe-pod/poddisruptionbudgets/test-pdb-2 200 OK in 1 milliseconds
I0526 11:53:50.576469   27374 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-kubectl-describe-pod/events?fieldSelector=involvedObject.namespace%3Dtest-kubectl-describe-pod%2CinvolvedObject.kind%3DPodDisruptionBudget%2CinvolvedObject.uid%3Dfafa2eb5-53db-4a05-8439-0df91436a8ea%2CinvolvedObject.name%3Dtest-pdb-2&limit=500 200 OK in 1 milliseconds
poddisruptionbudget.policy/test-pdb-3 created
core.sh:271: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
poddisruptionbudget.policy/test-pdb-4 created
core.sh:275: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
error: min-available and max-unavailable cannot be both specified
core.sh:281: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
pod/env-test-pod created
matched TEST_CMD_1
matched <set to the key 'key-1' in secret 'test-secret'>
matched TEST_CMD_2
matched <set to the key 'key-2' of config map 'test-configmap'>
... skipping 242 lines ...
core.sh:542: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: registry.k8s.io/pause:3.9:
Successful
message:kubectl-create kubectl-patch
has:kubectl-patch
pod/valid-pod patched
core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
+++ [0526 11:54:06] "kubectl patch with resourceVersion 620" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
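The Conflict above is forced by embedding a stale resourceVersion in the patch body; roughly like this (the container name is taken from later log output, the image is a placeholder):

  kubectl patch pod valid-pod -p '{"metadata":{"resourceVersion":"620"},"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"nginx"}]}}'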
pod "valid-pod" deleted
pod/valid-pod replaced
core.sh:586: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
Successful
message:kubectl-replace
has:kubectl-replace
Successful
message:error: --grace-period must have --force specified
has:\-\-grace-period must have \-\-force specified
Successful
message:error: --timeout must have --force specified
has:\-\-timeout must have \-\-force specified
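Both errors above come from flags that are only honored together with --force; a hedged sketch, assuming the test drives kubectl replace (valid-pod.yaml is a placeholder filename):

  kubectl replace --grace-period=1 -f valid-pod.yaml          # rejected without --force
  kubectl replace --grace-period=1 --force -f valid-pod.yaml  # delete-and-recreate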
node/node-v1-test created
I0526 11:54:07.672295   22231 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"node-v1-test\" does not exist"
core.sh:614: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
node/node-v1-test replaced (server dry run)
node/node-v1-test replaced (dry run)
core.sh:639: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
node/node-v1-test replaced
core.sh:655: Successful get node node-v1-test {{.metadata.annotations.a}}: b
... skipping 29 lines ...
spec:
  containers:
  - image: registry.k8s.io/pause:3.9
    name: kubernetes-pause
has:localonlyvalue
core.sh:691: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
error: 'name' already has a value (valid-pod), and --overwrite is false
core.sh:695: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
core.sh:699: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
(Bpod/valid-pod labeled
core.sh:703: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
(Bcore.sh:707: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 84 lines ...
+++ Running case: test-cmd.run_kubectl_create_error_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_error_tests
+++ [0526 11:54:18] Creating namespace namespace-1685102058-26454
namespace/namespace-1685102058-26454 created
Context "test" modified.
+++ [0526 11:54:18] Testing kubectl create with error
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
... skipping 63 lines ...
	If true, keep the managedFields when printing objects in JSON or YAML format.

    --template='':
	Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].

    --validate='strict':
	Must be one of: strict (or true), warn, ignore (or false). 		"true" or "strict" will use a schema to validate the input and fail the request if invalid. It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not. 		"warn" will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as "ignore" otherwise. 		"false" or "ignore" will not perform any schema validation, silently dropping any unknown or duplicate fields.

    --windows-line-endings=false:
	Only relevant if --edit=true. Defaults to the line ending native to your platform.

Usage:
  kubectl create -f FILENAME [options]
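To illustrate the three --validate levels documented in the help text above (deploy.yaml is a placeholder manifest):

  kubectl create -f deploy.yaml --validate=strict   # reject unknown or duplicate fields
  kubectl create -f deploy.yaml --validate=warn     # warn about them, but create anyway
  kubectl create -f deploy.yaml --validate=ignore   # skip schema validation entirely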
... skipping 38 lines ...
I0526 11:54:21.075035   22231 event.go:307] "Event occurred" object="namespace-1685102058-19765/test-deployment-retainkeys-d65c44c97" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-deployment-retainkeys-d65c44c97-fsq27"
deployment.apps "test-deployment-retainkeys" deleted
apply.sh:88: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/selector-test-pod created
apply.sh:92: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
apply.sh:101: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/test-pod created (dry run)
pod/test-pod created (server dry run)
apply.sh:107: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 31 lines ...
pod/b created
apply.sh:207: Successful get pods a {{.metadata.name}}: a
apply.sh:208: Successful get pods b -n nsb {{.metadata.name}}: b
pod "a" deleted
pod "b" deleted
Successful
message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
has:all resources selected for prune without explicitly passing --all
pod/a created
pod/b created
I0526 11:54:30.273400   19392 alloc.go:330] "allocated clusterIPs" service="namespace-1685102058-19765/prune-svc" clusterIPs={"IPv4":"10.0.0.219"}
service/prune-svc created
W0526 11:54:30.273977   31454 prune.go:71] Deprecated: kubectl apply will no longer prune non-namespaced resources by default when used with the --namespace flag in a future release. To preserve the current behaviour, list the resources you want to target explicitly in the --prune-allowlist flag.
... skipping 44 lines ...
pod/b unchanged
W0526 11:54:48.336233   31825 prune.go:71] Deprecated: kubectl apply will no longer prune non-namespaced resources by default when used with the --namespace flag in a future release. To preserve the current behaviour, list the resources you want to target explicitly in the --prune-allowlist flag.
pod/a pruned
apply.sh:265: Successful get pods -n nsb {{range.items}}{{.metadata.name}}:{{end}}: b:
namespace "nsb" deleted
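The deprecation warnings above recommend an explicit allowlist when pruning; a hedged example (the directory and label are placeholders, --prune-allowlist is the flag named in the warning):

  kubectl apply -f manifests/ --prune -l app=example --prune-allowlist=core/v1/Pod --prune-allowlist=core/v1/Service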
Successful
message:error: the namespace from the provided object "nsb" does not match the namespace "foo". You must pass '--namespace=nsb' to perform this operation.
has:the namespace from the provided object "nsb" does not match the namespace "foo".
apply.sh:276: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: 
service/a created
apply.sh:280: Successful get services a {{.metadata.name}}: a
Successful
message:The Service "a" is invalid: spec.clusterIPs[0]: Invalid value: []string{"10.0.0.12"}: may not change once set
... skipping 28 lines ...
apply.sh:302: Successful get deployment test-the-deployment {{.metadata.name}}: test-the-deployment
apply.sh:303: Successful get service test-the-service {{.metadata.name}}: test-the-service
(Bconfigmap "test-the-map" deleted
service "test-the-service" deleted
deployment.apps "test-the-deployment" deleted
Successful
message:Error from server (NotFound): namespaces "multi-resource-ns" not found
has:namespaces "multi-resource-ns" not found
apply.sh:311: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:namespace/multi-resource-ns created
Error from server (NotFound): error when creating "hack/testdata/multi-resource-1.yaml": namespaces "multi-resource-ns" not found
has:namespaces "multi-resource-ns" not found
Successful
message:Error from server (NotFound): pods "test-pod" not found
has:pods "test-pod" not found
pod/test-pod created
namespace/multi-resource-ns unchanged
apply.sh:319: Successful get pods test-pod -n multi-resource-ns {{.metadata.name}}: test-pod
(Bpod "test-pod" deleted
namespace "multi-resource-ns" deleted
I0526 11:54:59.971640   22231 namespace_controller.go:182] "Namespace has been deleted" namespace="nsb"
apply.sh:325: Successful get configmaps --field-selector=metadata.name=foo {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:configmap/foo created
error: resource mapping not found for name: "foo" namespace: "" from "hack/testdata/multi-resource-2.yaml": no matches for kind "Bogus" in version "example.com/v1"
ensure CRDs are installed first
has:no matches for kind "Bogus" in version "example.com/v1"
apply.sh:331: Successful get configmaps foo {{.metadata.name}}: foo
configmap "foo" deleted
apply.sh:337: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
... skipping 7 lines ...
pod "pod-c" deleted
apply.sh:345: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
apply.sh:349: Successful get crds {{range.items}}{{.metadata.name}}:{{end}}: 
I0526 11:55:05.168837   19392 handler.go:232] Adding GroupVersion example.com v1 to ResourceManager
Successful
message:customresourcedefinition.apiextensions.k8s.io/widgets.example.com created
error: resource mapping not found for name: "foo" namespace: "" from "hack/testdata/multi-resource-4.yaml": no matches for kind "Widget" in version "example.com/v1"
ensure CRDs are installed first
has:no matches for kind "Widget" in version "example.com/v1"
customresourcedefinition.apiextensions.k8s.io/widgets.example.com condition met
Successful
message:Error from server (NotFound): widgets.example.com "foo" not found
has:widgets.example.com "foo" not found
apply.sh:356: Successful get crds widgets.example.com {{.metadata.name}}: widgets.example.com
I0526 11:55:07.745643   19392 controller.go:624] quota admission added evaluator for: widgets.example.com
widget.example.com/foo created
customresourcedefinition.apiextensions.k8s.io/widgets.example.com unchanged
apply.sh:359: Successful get widget foo {{.metadata.name}}: foo
... skipping 34 lines ...
message:900
has:900
pod "test-pod" deleted
apply.sh:415: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
+++ [0526 11:55:11] Testing upgrade kubectl client-side apply to server-side apply
pod/test-pod created
error: Apply failed with 1 conflict: conflict with "kubectl-client-side-apply" using v1: .metadata.labels.name
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
  command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
  manifest to remove references to the fields that should keep their
... skipping 157 lines ...
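The conflict guidance above (truncated by the skipped lines) offers two resolutions; the first corresponds to re-running the apply with the --force-conflicts flag it names (test-pod.yaml is a placeholder):

  kubectl apply --server-side --force-conflicts -f test-pod.yaml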
(Bpod "nginx-extensions" deleted
Successful
(Bmessage:pod/test1 created
has:pod/test1 created
pod "test1" deleted
W0526 11:55:17.611616   19392 cacher.go:171] Terminating all watchers from cacher resources.mygroup.example.com
E0526 11:55:17.613119   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:error: Invalid image name "InvalidImageName": invalid reference format
has:error: Invalid image name "InvalidImageName": invalid reference format
+++ exit code: 0
Recording: run_kubectl_create_filter_tests
Running command: run_kubectl_create_filter_tests

+++ Running case: test-cmd.run_kubectl_create_filter_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 3 lines ...
Context "test" modified.
+++ [0526 11:55:17] Testing kubectl create filter
create.sh:50: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/selector-test-pod created
create.sh:54: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_apply_deployments_tests
Running command: run_kubectl_apply_deployments_tests

... skipping 4 lines ...
namespace/namespace-1685102118-28002 created
Context "test" modified.
+++ [0526 11:55:18] Testing kubectl apply deployments
apps.sh:150: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:151: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:152: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
W0526 11:55:18.886214   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 11:55:18.886257   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/my-depl created
I0526 11:55:19.076015   22231 event.go:307] "Event occurred" object="namespace-1685102118-28002/my-depl" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set my-depl-bfb57d6df to 1"
I0526 11:55:19.081672   22231 event.go:307] "Event occurred" object="namespace-1685102118-28002/my-depl-bfb57d6df" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: my-depl-bfb57d6df-8spwd"
apps.sh:156: Successful get deployments my-depl {{.metadata.name}}: my-depl
apps.sh:158: Successful get deployments my-depl {{.spec.template.metadata.labels.l1}}: l1
apps.sh:159: Successful get deployments my-depl {{.spec.selector.matchLabels.l1}}: l1
... skipping 12 lines ...
deployment.apps/nginx created
I0526 11:55:20.552549   22231 event.go:307] "Event occurred" object="namespace-1685102118-28002/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-5645b79496 to 3"
I0526 11:55:20.556278   22231 event.go:307] "Event occurred" object="namespace-1685102118-28002/nginx-5645b79496" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-5645b79496-mqs49"
I0526 11:55:20.560251   22231 event.go:307] "Event occurred" object="namespace-1685102118-28002/nginx-5645b79496" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-5645b79496-jmm97"
I0526 11:55:20.560469   22231 event.go:307] "Event occurred" object="namespace-1685102118-28002/nginx-5645b79496" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-5645b79496-jpxlf"
apps.sh:183: Successful get deployment nginx {{.metadata.name}}: nginx
W0526 11:55:21.012910   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 11:55:21.012942   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0526 11:55:24.750076   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 11:55:24.750127   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1685102118-28002\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"registry.k8s.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx", Namespace: "namespace-1685102118-28002"
for: "hack/testdata/deployment-label-change2.yaml": error when patching "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
has:Error from server (Conflict)
deployment.apps/nginx configured
I0526 11:55:29.163336   22231 event.go:307] "Event occurred" object="namespace-1685102118-28002/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-5675dfc785 to 3"
I0526 11:55:29.167578   22231 event.go:307] "Event occurred" object="namespace-1685102118-28002/nginx-5675dfc785" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-5675dfc785-sw4xs"
I0526 11:55:29.171343   22231 event.go:307] "Event occurred" object="namespace-1685102118-28002/nginx-5675dfc785" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-5675dfc785-k8b6q"
I0526 11:55:29.172352   22231 event.go:307] "Event occurred" object="namespace-1685102118-28002/nginx-5675dfc785" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-5675dfc785-xdnwx"
Successful
... skipping 154 lines ...
+  phase: Pending
+  qosClass: BestEffort
has:test-pod
diff.sh:78: Successful get pod {{range.items}}{{ if eq .metadata.name "test-pod" }}found{{end}}{{end}}:: :
pod/test-pod serverside-applied
diff.sh:82: Successful get pod {{range.items}}{{ if eq .metadata.name "test-pod" }}found{{end}}{{end}}:: found:
W0526 11:55:36.057135   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 11:55:36.057168   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:diff -u -N /tmp/LIVE-527940319/v1.Pod.namespace-1685102133-2634.test-pod /tmp/MERGED-2914169037/v1.Pod.namespace-1685102133-2634.test-pod
--- /tmp/LIVE-527940319/v1.Pod.namespace-1685102133-2634.test-pod	2023-05-26 11:55:36.069928846 +0000
+++ /tmp/MERGED-2914169037/v1.Pod.namespace-1685102133-2634.test-pod	2023-05-26 11:55:36.069928846 +0000
@@ -10,7 +10,7 @@
   uid: 8bec1c45-36b1-45c8-ac83-449fdb9161c7
... skipping 374 lines ...
+++ [0526 11:55:54] Creating namespace namespace-1685102154-17476
namespace/namespace-1685102154-17476 created
Context "test" modified.
+++ [0526 11:55:54] Testing kubectl get
get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:{
    "apiVersion": "v1",
    "items": [],
... skipping 21 lines ...
has not:No resources found
Successful
(Bmessage:NAME
has not:No resources found
get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:error: the server doesn't have a resource type "foobar"
has not:No resources found
Successful
message:No resources found in namespace-1685102154-17476 namespace.
has:No resources found
Successful
message:
has not:No resources found
Successful
message:No resources found in namespace-1685102154-17476 namespace.
has:No resources found
get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
Successful
message:Error from server (NotFound): pods "abc" not found
has not:List
Successful
message:I0526 11:55:55.921984   35067 loader.go:395] Config loaded from file:  /tmp/tmp.B29w6CjUov/.kube/config
I0526 11:55:55.927355   35067 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0526 11:55:55.945658   35067 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
I0526 11:55:55.947325   35067 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds
... skipping 413 lines ...
system:service-account-issuer-discovery                                2023-05-26T11:52:35Z
system:volume-scheduler                                                2023-05-26T11:52:35Z
url-reader                                                             2023-05-26T11:53:26Z
view                                                                   2023-05-26T11:52:35Z
has:/clusterroles?limit=500 200 OK
I0526 11:55:58.928678   22231 namespace_controller.go:182] "Namespace has been deleted" namespace="nsbprune"
W0526 11:55:58.990112   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 11:55:58.990156   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:default                      Active   3m22s
kube-node-lease              Active   3m22s
kube-public                  Active   3m22s
kube-system                  Active   3m22s
namespace-1685101998-27997   Active   2m38s
... skipping 172 lines ...
}
get.sh:158: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(B<no value>Successful
(Bmessage:valid-pod:
has:valid-pod:
Successful
(Bmessage:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2023-05-26T11:56:03Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fieldsType":"FieldsV1", "fieldsV1":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl-create", "operation":"Update", "time":"2023-05-26T11:56:03Z"}}, "name":"valid-pod", "namespace":"namespace-1685102163-31560", "resourceVersion":"1134", "uid":"063dcc1d-418e-426d-919c-be7de0c3ab1b"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"registry.k8s.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "preemptionPolicy":"PreemptLowerPriority", "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
has:missing is not found
error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
Successful
message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2023-05-26T11:56:03Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl-create","operation":"Update","time":"2023-05-26T11:56:03Z"}],"name":"valid-pod","namespace":"namespace-1685102163-31560","resourceVersion":"1134","uid":"063dcc1d-418e-426d-919c-be7de0c3ab1b"},"spec":{"containers":[{"image":"registry.k8s.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority","priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2023-05-26T11:56:03Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fieldsType:FieldsV1 fieldsV1:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl-create operation:Update time:2023-05-26T11:56:03Z]] name:valid-pod namespace:namespace-1685102163-31560 resourceVersion:1134 uid:063dcc1d-418e-426d-919c-be7de0c3ab1b] spec:map[containers:[map[image:registry.k8s.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true preemptionPolicy:PreemptLowerPriority priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
has:map has no entry for key "missing"
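The two failures above are the same missing key handled by the two output engines, and both can be reproduced directly:

  kubectl get pod valid-pod -o jsonpath='{.missing}'        # jsonpath: missing is not found
  kubectl get pod valid-pod -o go-template='{{.missing}}'   # go-template: map has no entry for key "missing"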
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
Successful
message:Error from server (NotFound): the server could not find the requested resource
has:the server could not find the requested resource
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:STATUS
Successful
... skipping 78 lines ...
  terminationGracePeriodSeconds: 30
status:
  phase: Pending
  qosClass: Guaranteed
has:name: valid-pod
Successful
message:Error from server (NotFound): pods "invalid-pod" not found
has:"invalid-pod" not found
pod "valid-pod" deleted
get.sh:204: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/redis-master created
pod/valid-pod created
Successful
... skipping 1140 lines ...
+++ [0526 11:56:17] Creating namespace namespace-1685102177-8117
namespace/namespace-1685102177-8117 created
Context "test" modified.
+++ [0526 11:56:18] Testing kubectl exec POD COMMAND
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
Successful
message:error: cannot exec into multiple objects at a time
has:cannot exec into multiple objects at a time
pod/test-pod created
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pods "test-pod" not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
pod "test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_exec_resource_name_tests
Running command: run_kubectl_exec_resource_name_tests

... skipping 3 lines ...
+++ [0526 11:56:18] Creating namespace namespace-1685102178-10862
namespace/namespace-1685102178-10862 created
Context "test" modified.
+++ [0526 11:56:18] Testing kubectl exec TYPE/NAME COMMAND
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: the server doesn't have a resource type "foo"
has:error:
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): deployments.apps "bar" not found
has:"bar" not found
pod/test-pod created
replicaset.apps/frontend created
I0526 11:56:19.398760   22231 event.go:307] "Event occurred" object="namespace-1685102178-10862/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-mgb9v"
I0526 11:56:19.402591   22231 event.go:307] "Event occurred" object="namespace-1685102178-10862/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-vsjgr"
I0526 11:56:19.403425   22231 event.go:307] "Event occurred" object="namespace-1685102178-10862/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-h429w"
configmap/test-set-env-config created
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
has:not implemented
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod, type/name or --filename must be specified
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod frontend-h429w does not have a host assigned
has not:not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod frontend-h429w does not have a host assigned
has not:pod, type/name or --filename must be specified
pod "test-pod" deleted
replicaset.apps "frontend" deleted
configmap "test-set-env-config" deleted
+++ exit code: 0
Recording: run_create_secret_tests
Running command: run_create_secret_tests

+++ Running case: test-cmd.run_create_secret_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_create_secret_tests
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
message:user-specified
has:user-specified
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"70761423-dbd5-4816-82cd-3a9be76b6fcc","resourceVersion":"1234","creationTimestamp":"2023-05-26T11:56:20Z"}}
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"70761423-dbd5-4816-82cd-3a9be76b6fcc","resourceVersion":"1236","creationTimestamp":"2023-05-26T11:56:20Z"},"data":{"key1":"config1"}}
has:uid
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"70761423-dbd5-4816-82cd-3a9be76b6fcc","resourceVersion":"1236","creationTimestamp":"2023-05-26T11:56:20Z"},"data":{"key1":"config1"}}
has:config1
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"70761423-dbd5-4816-82cd-3a9be76b6fcc"}}
Successful
message:Error from server (NotFound): configmaps "tester-update-cm" not found
has:configmaps "tester-update-cm" not found
+++ exit code: 0
Recording: run_kubectl_create_kustomization_directory_tests
Running command: run_kubectl_create_kustomization_directory_tests

+++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 25 lines ...
+++ command: run_kubectl_create_validate_tests
+++ [0526 11:56:21] Creating namespace namespace-1685102181-22645
namespace/namespace-1685102181-22645 created
Context "test" modified.
+++ [0526 11:56:22] Testing kubectl create --validate
Successful
message:Error from server (BadRequest): error when creating "hack/testdata/invalid-deployment-unknown-and-duplicate-fields.yaml": Deployment in version "v1" cannot be handled as a Deployment: strict decoding error: unknown field "spec.baz", unknown field "spec.foo"
has either:strict decoding error
or:error validating data
+++ [0526 11:56:22] Testing kubectl create --validate=true
Successful
message:Error from server (BadRequest): error when creating "hack/testdata/invalid-deployment-unknown-and-duplicate-fields.yaml": Deployment in version "v1" cannot be handled as a Deployment: strict decoding error: unknown field "spec.baz", unknown field "spec.foo"
has either:strict decoding error
or:error validating data
+++ [0526 11:56:22] Testing kubectl create --validate=false
I0526 11:56:22.531388   22231 event.go:307] "Event occurred" object="namespace-1685102181-22645/invalid-nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set invalid-nginx-deployment-cbdccf466 to 4"
Successful
message:deployment.apps/invalid-nginx-deployment created
has:deployment.apps/invalid-nginx-deployment created
I0526 11:56:22.537243   22231 event.go:307] "Event occurred" object="namespace-1685102181-22645/invalid-nginx-deployment-cbdccf466" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-cbdccf466-n8rbg"
I0526 11:56:22.540864   22231 event.go:307] "Event occurred" object="namespace-1685102181-22645/invalid-nginx-deployment-cbdccf466" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-cbdccf466-6pgjv"
I0526 11:56:22.545567   22231 event.go:307] "Event occurred" object="namespace-1685102181-22645/invalid-nginx-deployment-cbdccf466" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-cbdccf466-5tvq2"
I0526 11:56:22.550525   22231 event.go:307] "Event occurred" object="namespace-1685102181-22645/invalid-nginx-deployment-cbdccf466" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-cbdccf466-gmhjb"
deployment.apps "invalid-nginx-deployment" deleted
+++ [0526 11:56:22] Testing kubectl create --validate=strict
Successful
message:Error from server (BadRequest): error when creating "hack/testdata/invalid-deployment-unknown-and-duplicate-fields.yaml": Deployment in version "v1" cannot be handled as a Deployment: strict decoding error: unknown field "spec.baz", unknown field "spec.foo"
has either:strict decoding error
or:error validating data
I0526 11:56:22.851859   22231 namespace_controller.go:182] "Namespace has been deleted" namespace="test-events"
+++ [0526 11:56:22] Testing kubectl create --validate=warn
Warning: unknown field "spec.baz"
Warning: unknown field "spec.foo"
I0526 11:56:23.060305   22231 event.go:307] "Event occurred" object="namespace-1685102181-22645/invalid-nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set invalid-nginx-deployment-cbdccf466 to 4"
Successful
... skipping 13 lines ...
I0526 11:56:23.225677   22231 event.go:307] "Event occurred" object="namespace-1685102181-22645/invalid-nginx-deployment-cbdccf466" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-cbdccf466-64ddm"
I0526 11:56:23.225705   22231 event.go:307] "Event occurred" object="namespace-1685102181-22645/invalid-nginx-deployment-cbdccf466" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-cbdccf466-kkk6z"
I0526 11:56:23.230502   22231 event.go:307] "Event occurred" object="namespace-1685102181-22645/invalid-nginx-deployment-cbdccf466" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-cbdccf466-zb5t8"
deployment.apps "invalid-nginx-deployment" deleted
+++ [0526 11:56:23] Testing kubectl create
Successful
message:Error from server (BadRequest): error when creating "hack/testdata/invalid-deployment-unknown-and-duplicate-fields.yaml": Deployment in version "v1" cannot be handled as a Deployment: strict decoding error: unknown field "spec.baz", unknown field "spec.foo"
has either:strict decoding error
or:error validating data
+++ [0526 11:56:23] Testing kubectl create --validate=foo
Successful
message:error: invalid - validate option "foo"; must be one of: strict (or true), warn, ignore (or false)
has:invalid - validate option "foo"
+++ exit code: 0
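For reference, the validation modes exercised above map directly onto kubectl's --validate flag, which accepts strict (or true), warn, and ignore (or false), as the rejected --validate=foo case shows. A minimal sketch; the manifest path is hypothetical:

    kubectl create -f deploy.yaml --validate=strict   # reject unknown and duplicate fields
    kubectl create -f deploy.yaml --validate=warn     # create, but print a Warning per unknown field
    kubectl create -f deploy.yaml --validate=ignore   # skip field validation entirely
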
Recording: run_convert_tests
Running command: run_convert_tests

+++ Running case: test-cmd.run_convert_tests 
... skipping 50 lines ...
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}
has:apps/v1beta1
deployment.apps "nginx" deleted
Successful
message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
Successful
message:nginx:
has:nginx:
+++ exit code: 0
Recording: run_kubectl_delete_allnamespaces_tests
... skipping 103 lines ...
has:Timeout
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
Successful
message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
has:Invalid timeout value
pod "valid-pod" deleted
+++ exit code: 0
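The timeout cases above appear to exercise kubectl's global --request-timeout flag, which accepts a bare integer (seconds) or an integer with a time unit; a sketch under that assumption, reusing the valid-pod name from the log:

    kubectl get pod valid-pod --request-timeout=1s        # accepted
    kubectl get pod valid-pod --request-timeout=invalid   # error: Invalid timeout value
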
Recording: run_crd_tests
Running command: run_crd_tests

... skipping 157 lines ...
Flag --record has been deprecated, --record will be removed in the future
foo.company.com/test patched
crd.sh:296: Successful get foos/test {{.patched}}: value2
Flag --record has been deprecated, --record will be removed in the future
foo.company.com/test patched
crd.sh:298: Successful get foos/test {{.patched}}: <no value>
+++ [0526 11:56:33] "kubectl patch --local" returns error as expected for CustomResource: error: strategic merge patch is not supported for company.com/v1, Kind=Foo locally, try --type merge
{
    "apiVersion": "company.com/v1",
    "kind": "Foo",
    "metadata": {
        "annotations": {
            "kubernetes.io/change-cause": "kubectl patch foos/test --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 226 lines ...
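As the patch --local message above notes, strategic merge patch needs a schema that custom resources do not have, so a local patch of a Foo must fall back to a JSON merge patch. A sketch; the file name is hypothetical:

    kubectl patch --local -f foo.yaml -p '{"patched":"value2"}' -o yaml               # fails for a CR
    kubectl patch --local -f foo.yaml -p '{"patched":"value2"}' --type merge -o yaml  # works
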
crd.sh:519: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: 
namespace/non-native-resources created
bar.company.com/test created
crd.sh:524: Successful get bars {{len .items}}: 1
namespace "non-native-resources" deleted
crd.sh:527: Successful get bars {{len .items}}: 0
Error from server (NotFound): namespaces "non-native-resources" not found
customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
I0526 11:56:58.182831   19392 handler.go:232] Adding GroupVersion company.com v1 to ResourceManager
I0526 11:56:58.189321   19392 handler.go:232] Adding GroupVersion company.com v1 to ResourceManager
I0526 11:56:58.197553   19392 handler.go:232] Adding GroupVersion company.com v1 to ResourceManager
I0526 11:56:58.355774   19392 handler.go:232] Adding GroupVersion company.com v1 to ResourceManager
customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
... skipping 15 lines ...
+++ [0526 11:56:58] Testing recursive resources
+++ [0526 11:56:58] Creating namespace namespace-1685102218-12323
namespace/namespace-1685102218-12323 created
Context "test" modified.
generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
W0526 11:56:59.200994   19392 cacher.go:171] Terminating all watchers from cacher foos.company.com
E0526 11:56:59.203382   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0526 11:56:59.364878   19392 cacher.go:171] Terminating all watchers from cacher bars.company.com
E0526 11:56:59.366544   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0526 11:56:59.538675   19392 cacher.go:171] Terminating all watchers from cacher resources.mygroup.example.com
E0526 11:56:59.540984   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0526 11:56:59.709869   19392 cacher.go:171] Terminating all watchers from cacher validfoos.company.com
E0526 11:56:59.711326   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:pod/busybox0 created
pod/busybox1 created
error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
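The recursive cases in this block all share one shape: kubectl walks the directory given with -f when --recursive (-R) is set, processes every manifest it can decode, and still exits non-zero because busybox-broken.yaml spells the kind field as "ind" (visible in the decode errors throughout). A sketch against the fixture tree from the log:

    kubectl create -f hack/testdata/recursive/pod --recursive
    # creates pod/busybox0 and pod/busybox1, then reports the broken manifest and fails
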
generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
Successful
message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
W0526 11:57:00.423470   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 11:57:00.423512   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:pod/busybox0 replaced
pod/busybox1 replaced
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
W0526 11:57:00.597983   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 11:57:00.598020   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
W0526 11:57:00.672823   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 11:57:00.672867   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:Name:         busybox0
Namespace:    namespace-1685102218-12323
Priority:     0
Node:         <none>
Labels:       app=busybox0
... skipping 158 lines ...
has:Object 'Kind' is missing
generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
Successful
message:pod/busybox0 annotated
pod/busybox1 annotated
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:Warning: resource pods/busybox0 is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
pod/busybox0 configured
Warning: resource pods/busybox1 is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
pod/busybox1 configured
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:264: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:273: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:278: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
Successful
message:pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:283: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:288: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
Successful
message:pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:293: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:297: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "busybox0" force deleted
pod "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:302: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/busybox0 created
I0526 11:57:02.552581   22231 event.go:307] "Event occurred" object="namespace-1685102218-12323/busybox0" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox0-bszzt"
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0526 11:57:02.598198   22231 event.go:307] "Event occurred" object="namespace-1685102218-12323/busybox1" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox1-q6jjh"
generic-resources.sh:306: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:311: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:312: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:313: Successful get rc busybox1 {{.spec.replicas}}: 1
I0526 11:57:02.961457   22231 namespace_controller.go:182] "Namespace has been deleted" namespace="non-native-resources"
generic-resources.sh:318: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 1 2 80
generic-resources.sh:319: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 1 2 80
Successful
message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
horizontalpodautoscaler.autoscaling/busybox1 autoscaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
horizontalpodautoscaler.autoscaling "busybox0" deleted
horizontalpodautoscaler.autoscaling "busybox1" deleted
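The HPA fields asserted above (minReplicas 1, maxReplicas 2, target utilization 80) correspond to an autoscale call of roughly this form:

    kubectl autoscale rc busybox0 --min=1 --max=2 --cpu-percent=80
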
generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
W0526 11:57:03.390612   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 11:57:03.390645   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0526 11:57:03.421858   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 11:57:03.421901   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:328: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:329: Successful get rc busybox1 {{.spec.replicas}}: 1
W0526 11:57:03.547930   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 11:57:03.547965   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0526 11:57:03.605268   19392 alloc.go:330] "allocated clusterIPs" service="namespace-1685102218-12323/busybox0" clusterIPs={"IPv4":"10.0.0.250"}
I0526 11:57:03.612005   19392 alloc.go:330] "allocated clusterIPs" service="namespace-1685102218-12323/busybox1" clusterIPs={"IPv4":"10.0.0.112"}
generic-resources.sh:333: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
generic-resources.sh:334: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
Successful
message:service/busybox0 exposed
service/busybox1 exposed
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
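The services checked above (an unnamed port 80 per controller) are what kubectl expose produces; a sketch for one controller:

    kubectl expose rc busybox0 --port=80
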
generic-resources.sh:340: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:341: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:342: Successful get rc busybox1 {{.spec.replicas}}: 1
I0526 11:57:04.072529   22231 event.go:307] "Event occurred" object="namespace-1685102218-12323/busybox0" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox0-vxvs7"
I0526 11:57:04.079220   22231 event.go:307] "Event occurred" object="namespace-1685102218-12323/busybox1" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox1-fbr4w"
generic-resources.sh:346: Successful get rc busybox0 {{.spec.replicas}}: 2
generic-resources.sh:347: Successful get rc busybox1 {{.spec.replicas}}: 2
Successful
message:replicationcontroller/busybox0 scaled
replicationcontroller/busybox1 scaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
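The replica counts asserted above come from scaling each controller from 1 to 2; a sketch:

    kubectl scale rc busybox0 --replicas=2
    kubectl scale rc busybox1 --replicas=2
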
generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:356: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:361: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx1-deployment created
I0526 11:57:04.751653   22231 event.go:307] "Event occurred" object="namespace-1685102218-12323/nginx1-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx1-deployment-69c599568 to 2"
I0526 11:57:04.755788   22231 event.go:307] "Event occurred" object="namespace-1685102218-12323/nginx1-deployment-69c599568" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx1-deployment-69c599568-77vrj"
I0526 11:57:04.758916   22231 event.go:307] "Event occurred" object="namespace-1685102218-12323/nginx1-deployment-69c599568" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx1-deployment-69c599568-t2m94"
deployment.apps/nginx0-deployment created
error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0526 11:57:04.775886   22231 event.go:307] "Event occurred" object="namespace-1685102218-12323/nginx0-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx0-deployment-5944978c6f to 2"
I0526 11:57:04.781001   22231 event.go:307] "Event occurred" object="namespace-1685102218-12323/nginx0-deployment-5944978c6f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx0-deployment-5944978c6f-257j6"
I0526 11:57:04.784326   22231 event.go:307] "Event occurred" object="namespace-1685102218-12323/nginx0-deployment-5944978c6f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx0-deployment-5944978c6f-r9nsv"
generic-resources.sh:365: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
generic-resources.sh:366: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9:registry.k8s.io/nginx:1.7.9:
generic-resources.sh:370: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9:registry.k8s.io/nginx:1.7.9:
Successful
message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
deployment.apps/nginx1-deployment paused
deployment.apps/nginx0-deployment paused
generic-resources.sh:378: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
Successful
message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 11 lines ...
has:Waiting for deployment "nginx1-deployment" rollout to finish
Successful
(Bmessage:Waiting for deployment "nginx1-deployment" rollout to finish: 0 of 2 updated replicas are available...
timed out waiting for the condition
unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
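The "timed out waiting for the condition" lines are rollout status giving up after a deadline; no kubelet runs in this environment, so replicas never become available. A sketch with a hypothetical deadline:

    kubectl rollout status deployment/nginx1-deployment --timeout=1s
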
W0526 11:57:07.852283   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 11:57:07.852318   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:Waiting for deployment "nginx1-deployment" rollout to finish: 0 of 2 updated replicas are available...
Waiting for deployment "nginx0-deployment" rollout to finish: 0 of 2 updated replicas are available...
timed out waiting for the condition
timed out waiting for the condition
unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 18 lines ...
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx0-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx1-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
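The revision tables above are plain rollout history output; a sketch of the calls that produce them:

    kubectl rollout history deployment/nginx1-deployment              # all revisions
    kubectl rollout history deployment/nginx1-deployment --revision=1 # one revision in detail
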
W0526 11:57:08.717470   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 11:57:08.717508   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
deployment.apps "nginx1-deployment" force deleted
deployment.apps "nginx0-deployment" force deleted
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W0526 11:57:09.318683   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 11:57:09.318719   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:411: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/busybox0 created
I0526 11:57:10.114844   22231 event.go:307] "Event occurred" object="namespace-1685102218-12323/busybox0" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox0-qkfrq"
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0526 11:57:10.152302   22231 event.go:307] "Event occurred" object="namespace-1685102218-12323/busybox1" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox1-l6cpf"
generic-resources.sh:415: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
... skipping 2 lines ...
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox0" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox1" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox0" resuming is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox1" resuming is not supported
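rollout pause and resume are implemented only for Deployments, which is why every ReplicationController attempt above is rejected; a sketch of both forms:

    kubectl rollout pause deployment/nginx1-deployment   # supported
    kubectl rollout pause rc/busybox0                    # error: pausing is not supported
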
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
+++ exit code: 0
Recording: run_namespace_tests
Running command: run_namespace_tests

+++ Running case: test-cmd.run_namespace_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_namespace_tests
+++ [0526 11:57:11] Testing kubectl(v1:namespaces)
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created (dry run)
namespace/my-namespace created (server dry run)
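The two lines above are a client-side and a server-side dry run; neither persists the namespace, which is why the NotFound check that follows still passes. A sketch:

    kubectl create namespace my-namespace --dry-run=client -o yaml
    kubectl create namespace my-namespace --dry-run=server
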
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created
core.sh:1504: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
query for namespaces had limit param
query for resourcequotas had limit param
query for limitranges had limit param
... skipping 138 lines ...
I0526 11:57:16.686811   22231 shared_informer.go:311] Waiting for caches to sync for resource quota
I0526 11:57:16.686854   22231 shared_informer.go:318] Caches are synced for resource quota
I0526 11:57:16.803694   22231 shared_informer.go:311] Waiting for caches to sync for garbage collector
I0526 11:57:16.803737   22231 shared_informer.go:318] Caches are synced for garbage collector
namespace/my-namespace condition met
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created
core.sh:1515: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0526 11:57:17.982123   22231 horizontal.go:512] "Horizontal Pod Autoscaler has been deleted" HPA="namespace-1685102218-12323/busybox0"
I0526 11:57:17.986337   22231 horizontal.go:512] "Horizontal Pod Autoscaler has been deleted" HPA="namespace-1685102218-12323/busybox1"
W0526 11:57:18.128758   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 11:57:18.128802   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:Warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
namespace "my-namespace" deleted
namespace "namespace-1685101998-27997" deleted
namespace "namespace-1685101999-16503" deleted
... skipping 31 lines ...
namespace "namespace-1685102184-23218" deleted
namespace "namespace-1685102184-29719" deleted
namespace "namespace-1685102185-9671" deleted
namespace "namespace-1685102187-4757" deleted
namespace "namespace-1685102189-25986" deleted
namespace "namespace-1685102218-12323" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:Warning: deleting cluster-scoped resources
Successful
message:Warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
namespace "my-namespace" deleted
namespace "namespace-1685101998-27997" deleted
... skipping 32 lines ...
namespace "namespace-1685102184-23218" deleted
namespace "namespace-1685102184-29719" deleted
namespace "namespace-1685102185-9671" deleted
namespace "namespace-1685102187-4757" deleted
namespace "namespace-1685102189-25986" deleted
namespace "namespace-1685102218-12323" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:namespace "my-namespace" deleted
namespace/quotas created
core.sh:1522: Successful get namespaces/quotas {{.metadata.name}}: quotas
core.sh:1523: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name "test-quota" }}found{{end}}{{end}}:: :
resourcequota/test-quota created (dry run)
resourcequota/test-quota created (server dry run)
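The quota dry runs follow the same client/server pattern; a sketch in which the --hard spec is hypothetical (the log does not show it):

    kubectl create quota test-quota --namespace=quotas --hard=pods=5 --dry-run=client
    kubectl create quota test-quota --namespace=quotas --hard=pods=5 --dry-run=server
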
... skipping 7 lines ...
I0526 11:57:18.748151   41307 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 5 milliseconds
I0526 11:57:18.754195   41307 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/quotas/resourcequotas?limit=500 200 OK in 2 milliseconds
I0526 11:57:18.757119   41307 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/quotas/resourcequotas/test-quota 200 OK in 1 milliseconds
I0526 11:57:18.902476   22231 resource_quota_controller.go:337] "Resource quota has been deleted" key="quotas/test-quota"
resourcequota "test-quota" deleted
namespace "quotas" deleted
W0526 11:57:19.063791   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 11:57:19.063826   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0526 11:57:20.785034   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 11:57:20.785075   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1544: Successful get namespaces {{range.items}}{{ if eq .metadata.name "other" }}found{{end}}{{end}}:: :
namespace/other created
core.sh:1548: Successful get namespaces/other {{.metadata.name}}: other
core.sh:1552: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
pod/valid-pod created
core.sh:1556: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:1558: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Successful
message:error: a resource cannot be retrieved by name across all namespaces
has:a resource cannot be retrieved by name across all namespaces
core.sh:1565: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:1569: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
namespace "other" deleted
... skipping 109 lines ...
secret "test-secret" deleted
core.sh:871: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
secret/test-secret created
core.sh:875: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:876: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/dockerconfigjson
secret "test-secret" deleted
W0526 11:57:32.179194   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 11:57:32.179237   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:886: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
secret/test-secret created
core.sh:889: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:890: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
secret "test-secret" deleted
secret/test-secret created
... skipping 5 lines ...
core.sh:920: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
core.sh:921: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
secret "secret-string-data" deleted
core.sh:930: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
secret "test-secret" deleted
namespace "test-secrets" deleted
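The secret types asserted in this block (kubernetes.io/dockerconfigjson, kubernetes.io/tls, and a generic secret whose stringData keys come back base64-encoded under .data, hence map[k1:djE= k2:djI=]) map onto three create subcommands; a sketch with placeholder credentials and certificate paths:

    kubectl create secret docker-registry test-secret -n test-secrets \
        --docker-username=user --docker-password=pass --docker-email=user@example.com
    kubectl create secret tls test-secret -n test-secrets --cert=tls.crt --key=tls.key
    kubectl create secret generic secret-string-data -n test-secrets \
        --from-literal=k1=v1 --from-literal=k2=v2
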
W0526 11:57:34.759075   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 11:57:34.759106   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0526 11:57:35.218245   22231 namespace_controller.go:182] "Namespace has been deleted" namespace="other"
W0526 11:57:38.603751   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 11:57:38.603786   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Recording: run_configmap_tests
Running command: run_configmap_tests

+++ Running case: test-cmd.run_configmap_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 30 lines ...
I0526 11:57:40.340788   42482 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-configmaps/events?fieldSelector=involvedObject.name%3Dtest-binary-configmap%2CinvolvedObject.namespace%3Dtest-configmaps%2CinvolvedObject.kind%3DConfigMap%2CinvolvedObject.uid%3D90f861aa-c18e-49da-9caa-7c564f6aec8a&limit=500 200 OK in 1 milliseconds
I0526 11:57:40.342651   42482 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-configmaps/configmaps/test-configmap 200 OK in 1 milliseconds
I0526 11:57:40.343906   42482 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-configmaps/events?fieldSelector=involvedObject.namespace%3Dtest-configmaps%2CinvolvedObject.kind%3DConfigMap%2CinvolvedObject.uid%3De12808f8-ea6d-4d68-9eff-e90c62ca5f30%2CinvolvedObject.name%3Dtest-configmap&limit=500 200 OK in 1 milliseconds
configmap "test-configmap" deleted
configmap "test-binary-configmap" deleted
namespace "test-configmaps" deleted
W0526 11:57:42.573091   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 11:57:42.573128   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0526 11:57:43.682806   22231 namespace_controller.go:182] "Namespace has been deleted" namespace="test-secrets"
+++ exit code: 0
Recording: run_client_config_tests
Running command: run_client_config_tests

+++ Running case: test-cmd.run_client_config_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_client_config_tests
+++ [0526 11:57:45] Creating namespace namespace-1685102265-973
namespace/namespace-1685102265-973 created
Context "test" modified.
+++ [0526 11:57:45] Testing client config
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:Error in configuration: context was not found for specified context: missing-context
has:context was not found for specified context: missing-context
Successful
message:error: no server found for cluster "missing-cluster"
has:no server found for cluster "missing-cluster"
Successful
message:error: auth info "missing-user" does not exist
has:auth info "missing-user" does not exist
Successful
message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "vendor/k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
has:error loading config file
Successful
message:error: stat missing-config: no such file or directory
has:no such file or directory
+++ exit code: 0
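Each client-config failure above is driven by a single global flag; a sketch of the calls (any subcommand would do):

    kubectl get pods --kubeconfig=missing        # stat missing: no such file or directory
    kubectl get pods --context=missing-context   # context was not found
    kubectl get pods --cluster=missing-cluster   # no server found for cluster
    kubectl get pods --user=missing-user         # auth info does not exist
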
Recording: run_service_accounts_tests
Running command: run_service_accounts_tests

+++ Running case: test-cmd.run_service_accounts_tests 
... skipping 57 lines ...
Labels:                        <none>
Annotations:                   <none>
Schedule:                      59 23 31 2 *
Concurrency Policy:            Allow
Suspend:                       False
Successful Job History Limit:  3
Failed Job History Limit:      1
Starting Deadline Seconds:     <unset>
Selector:                      <unset>
Parallelism:                   <unset>
Completions:                   <unset>
Pod Template:
  Labels:  <none>
... skipping 56 lines ...
                  job-name=test-job
Annotations:      cronjob.kubernetes.io/instantiate: manual
Parallelism:      1
Completions:      1
Completion Mode:  NonIndexed
Start Time:       Fri, 26 May 2023 11:57:54 +0000
Pods Statuses:    1 Active (0 Ready) / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  batch.kubernetes.io/controller-uid=cec2efc2-d7b9-498d-a2c7-ba3f9cfaf6a1
           batch.kubernetes.io/job-name=test-job
           controller-uid=cec2efc2-d7b9-498d-a2c7-ba3f9cfaf6a1
           job-name=test-job
  Containers:
... skipping 67 lines ...
message:[perl -Mbignum=bpi -wle print bpi(10)]
has:perl -Mbignum=bpi -wle print bpi(10)
job.batch "my-pi" deleted
I0526 11:58:00.877238   22231 job_controller.go:523] enqueueing job namespace-1685102280-26128/my-pi
cronjob.batch "test-pi" deleted
+++ exit code: 0
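The pi jobs in this block are the usual perl/bignum example; a sketch of the direct job and of instantiating a job from a cronjob (which is what stamps the cronjob.kubernetes.io/instantiate: manual annotation seen above):

    kubectl create job my-pi --image=perl -- perl -Mbignum=bpi -wle 'print bpi(10)'
    kubectl create job test-job --from=cronjob/test-pi
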
W0526 11:58:00.979212   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 11:58:00.979246   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Recording: run_pod_templates_tests
Running command: run_pod_templates_tests

+++ Running case: test-cmd.run_pod_templates_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_pod_templates_tests
... skipping 7 lines ...
core.sh:1635: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
NAME    CONTAINERS   IMAGES   POD LABELS
nginx   nginx        nginx    name=nginx
core.sh:1643: Successful get podtemplates {{range.items}}{{.metadata.name}}:{{end}}: nginx:
query for podtemplates had limit param
query for events had limit param
W0526 11:58:01.865450   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 11:58:01.865493   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
query for podtemplates had user-specified limit param
Successful describe podtemplates verbose logs:
I0526 11:58:01.815688   43784 loader.go:395] Config loaded from file:  /tmp/tmp.B29w6CjUov/.kube/config
I0526 11:58:01.820950   43784 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0526 11:58:01.827926   43784 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1685102281-16238/podtemplates?limit=500 200 OK in 1 milliseconds
I0526 11:58:01.830497   43784 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1685102281-16238/podtemplates/nginx 200 OK in 1 milliseconds
... skipping 368 lines ...
  type: ClusterIP
status:
  loadBalancer: {}
Successful
message:kubectl-create kubectl-set
has:kubectl-set
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
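The error above is kubectl set refusing --local without an explicit manifest: in local mode the object must come from -f/--filename, never from the server. A minimal sketch with kubectl set selector (the file name is illustrative):

  # mutate the manifest locally and print the result; no API calls are made
  kubectl set selector -f redis-master-service.yaml 'role=master' --local -o yaml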
core.sh:1034: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
service/redis-master selector updated
Successful
message:Error from server (Conflict): Operation cannot be fulfilled on services "redis-master": the object has been modified; please apply your changes to the latest version and try again
has:Conflict
core.sh:1047: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
service "redis-master" deleted
core.sh:1054: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
core.sh:1058: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0526 11:58:04.962644   22231 namespace_controller.go:182] "Namespace has been deleted" namespace="test-jobs"
... skipping 305 lines ...
message:daemonset.apps/bind 
REVISION  CHANGE-CAUSE
2         kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true
3         kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true
has:3         kubectl apply
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
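Both messages above come from kubectl rollout reading the daemonset's revision history; an undo targeting a revision that was never recorded is rejected. A sketch using the daemonset from the log:

  kubectl rollout history daemonset/bind                     # lists revisions 2 and 3
  kubectl rollout undo daemonset/bind --to-revision=2        # valid revision
  kubectl rollout undo daemonset/bind --to-revision=1000000  # fails as above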
apps.sh:122: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/pause:2.0:
apps.sh:123: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
daemonset.apps/bind rolled back
apps.sh:126: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/pause:latest:
apps.sh:127: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/nginx:test-cmd:
... skipping 60 lines ...
Namespace:    namespace-1685102293-29989
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1685102293-29989
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
Namespace:    namespace-1685102293-29989
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
Namespace:    namespace-1685102293-29989
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 27 lines ...
Namespace:    namespace-1685102293-29989
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1685102293-29989
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1685102293-29989
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
Namespace:    namespace-1685102293-29989
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 25 lines ...
core.sh:1240: Successful get rc frontend {{.spec.replicas}}: 3
replicationcontroller/frontend scaled
E0526 11:58:15.893470   22231 replica_set.go:220] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend  namespace-1685102293-29989  ffebf5a5-e616-4b92-ac7f-55ca43c5feb1 2271 2 2023-05-26 11:58:14 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] [] [{kubectl Update v1 <nil> FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {kube-controller-manager Update v1 2023-05-26 11:58:14 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status} {kubectl-create Update v1 2023-05-26 11:58:14 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{"f:selector":{},"f:template":{".":{},"f:metadata":{".":{},"f:creationTimestamp":{},"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{".":{},"f:containers":{".":{},"k:{\"name\":\"php-redis\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"GET_HOSTS_FROM\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] [] []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v4 [] []  [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}] []} [] [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0022336d8 <nil> ClusterFirst map[]   <nil>  false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil <nil> [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
I0526 11:58:15.898832   22231 event.go:307] "Event occurred" object="namespace-1685102293-29989/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: frontend-swnrz"
core.sh:1244: Successful get rc frontend {{.spec.replicas}}: 2
core.sh:1248: Successful get rc frontend {{.spec.replicas}}: 2
error: Expected replicas to be 3, was 2
core.sh:1252: Successful get rc frontend {{.spec.replicas}}: 2
core.sh:1256: Successful get rc frontend {{.spec.replicas}}: 2
replicationcontroller/frontend scaled
I0526 11:58:16.365697   22231 event.go:307] "Event occurred" object="namespace-1685102293-29989/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-wsbxh"
core.sh:1260: Successful get rc frontend {{.spec.replicas}}: 3
W0526 11:58:16.448022   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 11:58:16.448069   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1264: Successful get rc frontend {{.spec.replicas}}: 3
replicationcontroller/frontend scaled
E0526 11:58:16.591907   22231 replica_set.go:220] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend  namespace-1685102293-29989  ffebf5a5-e616-4b92-ac7f-55ca43c5feb1 2283 4 2023-05-26 11:58:14 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] [] [{kubectl Update v1 <nil> FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {kubectl-create Update v1 2023-05-26 11:58:14 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{"f:selector":{},"f:template":{".":{},"f:metadata":{".":{},"f:creationTimestamp":{},"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{".":{},"f:containers":{".":{},"k:{\"name\":\"php-redis\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"GET_HOSTS_FROM\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update v1 2023-05-26 11:58:16 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] [] []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v4 [] []  [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}] []} [] [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001554038 <nil> ClusterFirst map[]   <nil>  false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil <nil> [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
I0526 11:58:16.597090   22231 event.go:307] "Event occurred" object="namespace-1685102293-29989/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: frontend-wsbxh"
core.sh:1268: Successful get rc frontend {{.spec.replicas}}: 2
(Breplicationcontroller "frontend" deleted
... skipping 8 lines ...
has:replicationcontroller/redis-master scaled (dry run)
Successful
message:replicationcontroller/redis-master scaled (dry run)
replicationcontroller/redis-slave scaled (dry run)
has:replicationcontroller/redis-slave scaled (dry run)
core.sh:1280: Successful get rc redis-master {{.spec.replicas}}: 1
W0526 11:58:17.396407   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 11:58:17.396465   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1281: Successful get rc redis-slave {{.spec.replicas}}: 2
Successful
message:replicationcontroller/redis-master scaled (server dry run)
replicationcontroller/redis-slave scaled (server dry run)
has:replicationcontroller/redis-master scaled (server dry run)
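The two dry-run variants above differ in where the scale is simulated: client dry run computes the change in kubectl, server dry run sends the request through admission without persisting it. A sketch over both rcs at once:

  kubectl scale rc redis-master redis-slave --replicas=4 --dry-run=client
  kubectl scale rc redis-master redis-slave --replicas=4 --dry-run=server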
Successful
... skipping 45 lines ...
I0526 11:58:19.305765   19392 alloc.go:330] "allocated clusterIPs" service="namespace-1685102293-29989/expose-test-deployment" clusterIPs={"IPv4":"10.0.0.103"}
Successful
message:service/expose-test-deployment exposed
has:service/expose-test-deployment exposed
service "expose-test-deployment" deleted
Successful
message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
has:invalid deployment: no selectors
deployment.apps/nginx-deployment created
I0526 11:58:19.673404   22231 event.go:307] "Event occurred" object="namespace-1685102293-29989/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-7df65dc9f4 to 3"
I0526 11:58:19.677561   22231 event.go:307] "Event occurred" object="namespace-1685102293-29989/nginx-deployment-7df65dc9f4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-7df65dc9f4-682rg"
I0526 11:58:19.680826   22231 event.go:307] "Event occurred" object="namespace-1685102293-29989/nginx-deployment-7df65dc9f4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-7df65dc9f4-tz69w"
I0526 11:58:19.684468   22231 event.go:307] "Event occurred" object="namespace-1685102293-29989/nginx-deployment-7df65dc9f4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-7df65dc9f4-7zrm5"
... skipping 24 lines ...
(Bpod "valid-pod" deleted
service "frontend" deleted
service "frontend-2" deleted
service "frontend-3" deleted
service "frontend-4" deleted
Successful
message:error: cannot expose a Node
has:cannot expose
Successful
message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
has:metadata.name: Invalid value
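The two failures above are kubectl expose's main guardrails: the source object must carry a pod selector, and the generated Service name must be a valid DNS label of at most 63 characters. A sketch (port numbers are illustrative):

  # a Deployment exposes its .spec.selector
  kubectl expose deployment nginx-deployment --port=80 --target-port=8000
  # a Node carries no selector, so this fails with 'cannot expose a Node'
  kubectl expose node 127.0.0.1 --port=80
  # a >63-character Service name fails metadata.name validation
  kubectl expose deployment nginx-deployment --port=80 \
    --name=invalid-large-service-name-that-has-more-than-sixty-three-characters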
I0526 11:58:21.561062   19392 alloc.go:330] "allocated clusterIPs" service="namespace-1685102293-29989/kubernetes-serve-hostname-testing-sixty-three-characters-in-len" clusterIPs={"IPv4":"10.0.0.59"}
Successful
... skipping 32 lines ...
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1436: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 1 2 70
(Bhorizontalpodautoscaler.autoscaling "frontend" deleted
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1440: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 2 3 80
(Bhorizontalpodautoscaler.autoscaling "frontend" deleted
error: required flag(s) "max" not set
replicationcontroller "frontend" deleted
core.sh:1449: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
... skipping 24 lines ...
          limits:
            cpu: 300m
          requests:
            cpu: 300m
      terminationGracePeriodSeconds: 0
status: {}
Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
deployment.apps/nginx-deployment-resources created
I0526 11:58:24.443171   22231 event.go:307] "Event occurred" object="namespace-1685102293-29989/nginx-deployment-resources" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-resources-5f79767bf9 to 3"
I0526 11:58:24.454274   22231 event.go:307] "Event occurred" object="namespace-1685102293-29989/nginx-deployment-resources-5f79767bf9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-5f79767bf9-ssm6t"
I0526 11:58:24.459291   22231 event.go:307] "Event occurred" object="namespace-1685102293-29989/nginx-deployment-resources-5f79767bf9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-5f79767bf9-kb6x9"
I0526 11:58:24.460290   22231 event.go:307] "Event occurred" object="namespace-1685102293-29989/nginx-deployment-resources-5f79767bf9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-5f79767bf9-rb7cr"
core.sh:1455: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
core.sh:1456: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:test-cmd:
core.sh:1457: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/perl:
deployment.apps/nginx-deployment-resources resource requirements updated
I0526 11:58:24.762175   22231 event.go:307] "Event occurred" object="namespace-1685102293-29989/nginx-deployment-resources" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-resources-77d775b4f9 to 1"
I0526 11:58:24.766638   22231 event.go:307] "Event occurred" object="namespace-1685102293-29989/nginx-deployment-resources-77d775b4f9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-77d775b4f9-n2lsp"
core.sh:1460: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
core.sh:1461: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
error: unable to find container named redis
deployment.apps/nginx-deployment-resources resource requirements updated
I0526 11:58:25.085743   22231 event.go:307] "Event occurred" object="namespace-1685102293-29989/nginx-deployment-resources" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-resources-5f79767bf9 to 2 from 3"
I0526 11:58:25.092162   22231 event.go:307] "Event occurred" object="namespace-1685102293-29989/nginx-deployment-resources-5f79767bf9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-resources-5f79767bf9-ssm6t"
I0526 11:58:25.096299   22231 event.go:307] "Event occurred" object="namespace-1685102293-29989/nginx-deployment-resources" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-resources-688f8b78b5 to 1 from 0"
I0526 11:58:25.100744   22231 event.go:307] "Event occurred" object="namespace-1685102293-29989/nginx-deployment-resources-688f8b78b5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-688f8b78b5-mfdlc"
core.sh:1466: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
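"unable to find container named redis" above is kubectl set resources validating its -c/--containers filter against the pod template before patching. A sketch against the deployment under test (container names are assumed from the nginx and perl images shown above):

  # fails: the template has no container named redis
  kubectl set resources deployment nginx-deployment-resources -c=redis --limits=cpu=200m
  # works: target an existing container by name
  kubectl set resources deployment nginx-deployment-resources -c=perl --limits=cpu=300m,memory=512Mi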
... skipping 155 lines ...
    status: "True"
    type: Progressing
  observedGeneration: 4
  replicas: 4
  unavailableReplicas: 4
  updatedReplicas: 1
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
core.sh:1477: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
core.sh:1478: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
core.sh:1479: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 46 lines ...
                pod-template-hash=859689d794
Annotations:    deployment.kubernetes.io/desired-replicas: 1
                deployment.kubernetes.io/max-replicas: 2
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/test-nginx-apps
Replicas:       1 current / 1 desired
Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=test-nginx-apps
           pod-template-hash=859689d794
  Containers:
   nginx:
    Image:        registry.k8s.io/nginx:test-cmd
... skipping 123 lines ...
apps.sh:340: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9:
    Image:	registry.k8s.io/nginx:test-cmd
deployment.apps/nginx rolled back (server dry run)
apps.sh:344: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9:
deployment.apps/nginx rolled back
apps.sh:348: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:test-cmd:
error: unable to find specified revision 1000000 in history
apps.sh:351: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:test-cmd:
deployment.apps/nginx rolled back
apps.sh:355: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9:
deployment.apps/nginx paused
error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume' and try again
error: deployments.apps "nginx" can't restart paused deployment (run rollout resume first)
deployment.apps/nginx resumed
deployment.apps/nginx rolled back
    deployment.kubernetes.io/revision-history: 1,3
error: desired revision (3) is different from the running revision (5)
deployment.apps/nginx restarted
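The errors above pin down the pause semantics: a paused deployment can be neither rolled back nor restarted, and resuming lifts both restrictions. A sketch:

  kubectl rollout pause deployment/nginx
  kubectl rollout undo deployment/nginx      # refused while paused
  kubectl rollout restart deployment/nginx   # also refused while paused
  kubectl rollout resume deployment/nginx
  kubectl rollout restart deployment/nginx   # now allowed; see the scaling events below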
I0526 11:58:35.024975   22231 event.go:307] "Event occurred" object="namespace-1685102306-3347/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-6b9cd9ccf6 to 0 from 1"
I0526 11:58:35.034034   22231 event.go:307] "Event occurred" object="namespace-1685102306-3347/nginx-6b9cd9ccf6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-6b9cd9ccf6-9k4n6"
I0526 11:58:35.040669   22231 event.go:307] "Event occurred" object="namespace-1685102306-3347/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-66bcd4454 to 1 from 0"
I0526 11:58:35.044248   22231 event.go:307] "Event occurred" object="namespace-1685102306-3347/nginx-66bcd4454" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-66bcd4454-bvhdm"
Successful
... skipping 79 lines ...
apps.sh:398: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:test-cmd:
apps.sh:399: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/perl:
deployment.apps/nginx-deployment image updated
I0526 11:58:37.535335   22231 event.go:307] "Event occurred" object="namespace-1685102306-3347/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-6444b54576 to 1"
I0526 11:58:37.539240   22231 event.go:307] "Event occurred" object="namespace-1685102306-3347/nginx-deployment-6444b54576" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-6444b54576-mgpwt"
apps.sh:402: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9:
W0526 11:58:37.689476   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 11:58:37.689526   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:403: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/perl:
(Berror: unable to find container named "redis"
deployment.apps/nginx-deployment image updated
apps.sh:408: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:test-cmd:
apps.sh:409: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/perl:
deployment.apps/nginx-deployment image updated
apps.sh:412: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9:
apps.sh:413: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/perl:
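The same container-name validation appears in kubectl set image ('unable to find container named "redis"'), and '*' updates every container in the template. A sketch:

  kubectl set image deployment/nginx-deployment nginx=registry.k8s.io/nginx:1.7.9
  kubectl set image deployment/nginx-deployment redis=redis:6   # fails: no such container
  kubectl set image deployment/nginx-deployment '*'=registry.k8s.io/nginx:test-cmd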
... skipping 55 lines ...
I0526 11:58:40.999068   22231 event.go:307] "Event occurred" object="namespace-1685102306-3347/nginx-deployment-57bf7fbc68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-57bf7fbc68-rvht9"
Warning: key password transferred to PASSWORD
Warning: key username transferred to USERNAME
deployment.apps/nginx-deployment env updated
deployment.apps/nginx-deployment env updated
Successful
message:error: standard input cannot be used for multiple arguments
has:standard input cannot be used for multiple arguments
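kubectl set env reads keys from live objects or from stdin, but stdin can back only one input, which is the failure above; keys imported via --from are upper-cased into env names, hence the 'key password transferred to PASSWORD' warnings. A sketch using the objects from this test:

  kubectl set env deployment/nginx-deployment --from=secret/test-set-env-secret
  kubectl set env deployment/nginx-deployment MY_FLAG=on   # literal assignment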
deployment.apps "nginx-deployment" deleted
E0526 11:58:41.294101   22231 replica_set.go:544] sync "namespace-1685102306-3347/nginx-deployment-6bdc9df444" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-6bdc9df444": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1685102306-3347/nginx-deployment-6bdc9df444, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 15105452-7a72-4c5a-9ba3-3ae14ae73451, UID in object meta: 
configmap "test-set-env-config" deleted
E0526 11:58:41.397601   22231 replica_set.go:544] sync "namespace-1685102306-3347/nginx-deployment-5446b4888c" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-5446b4888c": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1685102306-3347/nginx-deployment-5446b4888c, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 9016841e-6ccf-495b-b55b-15a0ef744579, UID in object meta: 
secret "test-set-env-secret" deleted
apps.sh:474: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
E0526 11:58:41.544555   22231 replica_set.go:544] sync "namespace-1685102306-3347/nginx-deployment-56795f96bc" failed with replicasets.apps "nginx-deployment-56795f96bc" not found
E0526 11:58:41.594925   22231 replica_set.go:544] sync "namespace-1685102306-3347/nginx-deployment-ffc86458c" failed with replicasets.apps "nginx-deployment-ffc86458c" not found
E0526 11:58:41.644763   22231 replica_set.go:544] sync "namespace-1685102306-3347/nginx-deployment-57bf7fbc68" failed with replicasets.apps "nginx-deployment-57bf7fbc68" not found
deployment.apps/nginx-deployment created
I0526 11:58:41.702768   22231 event.go:307] "Event occurred" object="namespace-1685102306-3347/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-57bf7fbc68 to 3"
I0526 11:58:41.707620   22231 event.go:307] "Event occurred" object="namespace-1685102306-3347/nginx-deployment-57bf7fbc68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-57bf7fbc68-4z4r6"
apps.sh:477: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
I0526 11:58:41.796889   22231 event.go:307] "Event occurred" object="namespace-1685102306-3347/nginx-deployment-57bf7fbc68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-57bf7fbc68-4vw99"
I0526 11:58:41.847692   22231 event.go:307] "Event occurred" object="namespace-1685102306-3347/nginx-deployment-57bf7fbc68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-57bf7fbc68-qhhth"
... skipping 193 lines ...
    Environment:	<none>
    Mounts:	<none>
  Volumes:	<none>
has:registry.k8s.io/perl
deployment.apps "nginx-deployment" deleted
+++ exit code: 0
E0526 11:58:42.544158   22231 replica_set.go:544] sync "namespace-1685102306-3347/nginx-deployment-6444b54576" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-6444b54576": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1685102306-3347/nginx-deployment-6444b54576, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: da61e6b2-e2ac-4c36-8e9a-7e4f42119cfc, UID in object meta: 
Recording: run_rs_tests
Running command: run_rs_tests

+++ Running case: test-cmd.run_rs_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_rs_tests
+++ [0526 11:58:42] Creating namespace namespace-1685102322-15901
E0526 11:58:42.595563   22231 replica_set.go:544] sync "namespace-1685102306-3347/nginx-deployment-57bf7fbc68" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-57bf7fbc68": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1685102306-3347/nginx-deployment-57bf7fbc68, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 0a51b717-b90b-49ae-bffd-58eaf1f0e930, UID in object meta: 
namespace/namespace-1685102322-15901 created
Context "test" modified.
+++ [0526 11:58:42] Testing kubectl(v1:replicasets)
apps.sh:645: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
replicaset.apps/frontend created
I0526 11:58:43.005178   22231 event.go:307] "Event occurred" object="namespace-1685102322-15901/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-rws75"
+++ [0526 11:58:43] Deleting rs
I0526 11:58:43.009689   22231 event.go:307] "Event occurred" object="namespace-1685102322-15901/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-hsvt5"
I0526 11:58:43.010550   22231 event.go:307] "Event occurred" object="namespace-1685102322-15901/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-jj4h2"
replicaset.apps "frontend" deleted
E0526 11:58:43.094587   22231 replica_set.go:544] sync "namespace-1685102322-15901/frontend" failed with Operation cannot be fulfilled on replicasets.apps "frontend": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1685102322-15901/frontend, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 6a19bf87-b6c2-46f4-bc97-e13608263810, UID in object meta: 
apps.sh:651: Successful get pods -l tier=frontend {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:655: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
replicaset.apps/frontend created
I0526 11:58:43.459861   22231 event.go:307] "Event occurred" object="namespace-1685102322-15901/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-q8k8v"
I0526 11:58:43.464079   22231 event.go:307] "Event occurred" object="namespace-1685102322-15901/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-zvnnm"
I0526 11:58:43.464108   22231 event.go:307] "Event occurred" object="namespace-1685102322-15901/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-fhv4p"
apps.sh:659: Successful get pods -l tier=frontend {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
+++ [0526 11:58:43] Deleting rs
replicaset.apps "frontend" deleted
E0526 11:58:43.694085   22231 replica_set.go:544] sync "namespace-1685102322-15901/frontend" failed with replicasets.apps "frontend" not found
apps.sh:663: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:665: Successful get pods -l tier=frontend {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
pod "frontend-fhv4p" deleted
pod "frontend-q8k8v" deleted
pod "frontend-zvnnm" deleted
apps.sh:668: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 15 lines ...
Namespace:    namespace-1685102322-15901
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1685102322-15901
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
Namespace:    namespace-1685102322-15901
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
Namespace:    namespace-1685102322-15901
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 25 lines ...
Namespace:    namespace-1685102322-15901
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1685102322-15901
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1685102322-15901
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
Namespace:    namespace-1685102322-15901
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 168 lines ...
apps.sh:731: Successful get deploy scale-2 {{.spec.replicas}}: 3
apps.sh:732: Successful get deploy scale-3 {{.spec.replicas}}: 3
replicaset.apps "frontend" deleted
deployment.apps "scale-1" deleted
deployment.apps "scale-2" deleted
deployment.apps "scale-3" deleted
W0526 11:58:47.916885   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 11:58:47.916933   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicaset.apps/frontend created
I0526 11:58:48.074535   22231 event.go:307] "Event occurred" object="namespace-1685102322-15901/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-kh52h"
I0526 11:58:48.079081   22231 event.go:307] "Event occurred" object="namespace-1685102322-15901/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-q8555"
I0526 11:58:48.079113   22231 event.go:307] "Event occurred" object="namespace-1685102322-15901/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-ks2z9"
apps.sh:740: Successful get rs frontend {{.spec.replicas}}: 3
I0526 11:58:48.224553   19392 alloc.go:330] "allocated clusterIPs" service="namespace-1685102322-15901/frontend" clusterIPs={"IPv4":"10.0.0.165"}
... skipping 46 lines ...
horizontalpodautoscaler.autoscaling/frontend autoscaled
apps.sh:808: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 2 3 80
Successful
message:kubectl-autoscale
has:kubectl-autoscale
horizontalpodautoscaler.autoscaling "frontend" deleted
error: required flag(s) "max" not set
replicaset.apps "frontend" deleted
+++ exit code: 0
Recording: run_stateful_set_tests
Running command: run_stateful_set_tests

+++ Running case: test-cmd.run_stateful_set_tests 
... skipping 240 lines ...
apps.sh:554: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx-slim:0.8:
apps.sh:555: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/pause:2.0:
apps.sh:556: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
statefulset.apps/nginx rolled back
apps.sh:559: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx-slim:0.7:
apps.sh:560: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
W0526 11:58:55.356620   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 11:58:55.356658   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:statefulset.apps/nginx 
REVISION  CHANGE-CAUSE
2         kubectl apply --filename=hack/testdata/rollingupdate-statefulset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true
3         kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true
has:statefulset.apps/nginx
... skipping 13 lines ...
message:statefulset.apps/nginx 
REVISION  CHANGE-CAUSE
2         kubectl apply --filename=hack/testdata/rollingupdate-statefulset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true
3         kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true
has:3         kubectl apply
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
apps.sh:570: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx-slim:0.7:
apps.sh:571: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
statefulset.apps/nginx rolled back
apps.sh:574: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx-slim:0.8:
apps.sh:575: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/pause:2.0:
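The statefulset rollbacks above ride the same kubectl rollout machinery as deployments, keyed on ControllerRevision history. A sketch:

  kubectl rollout history statefulset/nginx
  kubectl rollout undo statefulset/nginx                         # back one revision
  kubectl rollout undo statefulset/nginx --to-revision=1000000   # fails as above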
... skipping 87 lines ...
Name:         mock
Namespace:    namespace-1685102336-1443
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        registry.k8s.io/pause:3.9
    Port:         9949/TCP
... skipping 61 lines ...
Name:         mock
Namespace:    namespace-1685102336-1443
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        registry.k8s.io/pause:3.9
    Port:         9949/TCP
... skipping 61 lines ...
Name:         mock
Namespace:    namespace-1685102336-1443
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        registry.k8s.io/pause:3.9
    Port:         9949/TCP
... skipping 26 lines ...
generic-resources.sh:153: Successful get services mock {{.metadata.annotations.annotated}}: true
generic-resources.sh:159: Successful get rc mock {{.metadata.annotations.annotated}}: true
(Bservice "mock" deleted
replicationcontroller "mock" deleted
Testing with file hack/testdata/multi-resource-rclist.json and replace with file hack/testdata/multi-resource-rclist-modify.json
generic-resources.sh:63: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: 
W0526 11:59:03.432986   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 11:59:03.433020   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:64: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/mock created
replicationcontroller/mock2 created
I0526 11:59:03.704867   22231 event.go:307] "Event occurred" object="namespace-1685102336-1443/mock" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mock-mwbnr"
I0526 11:59:03.707739   22231 event.go:307] "Event occurred" object="namespace-1685102336-1443/mock2" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mock2-ztnmj"
generic-resources.sh:78: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: mock:mock2:
... skipping 4 lines ...
Namespace:    namespace-1685102336-1443
Selector:     app=mock
Labels:       app=mock
              status=replaced
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        registry.k8s.io/pause:3.9
    Port:         9949/TCP
... skipping 11 lines ...
Namespace:    namespace-1685102336-1443
Selector:     app=mock2
Labels:       app=mock2
              status=replaced
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock2
  Containers:
   mock-container:
    Image:        registry.k8s.io/pause:3.9
    Port:         9949/TCP
... skipping 115 lines ...
+++ [0526 11:59:08] Creating namespace namespace-1685102348-21088
namespace/namespace-1685102348-21088 created
Context "test" modified.
+++ [0526 11:59:08] Testing persistent volumes
storage.sh:30: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
persistentvolume/pv0001 created
E0526 11:59:08.954460   22231 pv_protection_controller.go:113] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
(Bpersistentvolume "pv0001" deleted
persistentvolume/pv0002 created
E0526 11:59:09.463078   22231 pv_protection_controller.go:113] PV pv0002 failed with : Operation cannot be fulfilled on persistentvolumes "pv0002": the object has been modified; please apply your changes to the latest version and try again
storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
(Bpersistentvolume "pv0002" deleted
persistentvolume/pv0003 created
E0526 11:59:09.946493   22231 pv_protection_controller.go:113] PV pv0003 failed with : Operation cannot be fulfilled on persistentvolumes "pv0003": the object has been modified; please apply your changes to the latest version and try again
storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
query for persistentvolumes had limit param
query for events had limit param
W0526 11:59:10.184315   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 11:59:10.184353   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
query for persistentvolumes had user-specified limit param
Successful describe persistentvolumes verbose logs:
I0526 11:59:10.077735   53742 loader.go:395] Config loaded from file:  /tmp/tmp.B29w6CjUov/.kube/config
I0526 11:59:10.083228   53742 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0526 11:59:10.090273   53742 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/persistentvolumes?limit=500 200 OK in 1 milliseconds
I0526 11:59:10.092829   53742 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/persistentvolumes/pv0003 200 OK in 1 milliseconds
... skipping 98 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 26 May 2023 11:53:16 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 26 May 2023 11:53:16 +0000   Fri, 26 May 2023 11:54:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 26 May 2023 11:53:16 +0000   Fri, 26 May 2023 11:54:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 26 May 2023 11:53:16 +0000   Fri, 26 May 2023 11:54:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 34 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 26 May 2023 11:53:16 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 26 May 2023 11:53:16 +0000   Fri, 26 May 2023 11:54:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 26 May 2023 11:53:16 +0000   Fri, 26 May 2023 11:54:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 26 May 2023 11:53:16 +0000   Fri, 26 May 2023 11:54:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 35 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 26 May 2023 11:53:16 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 26 May 2023 11:53:16 +0000   Fri, 26 May 2023 11:54:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 26 May 2023 11:53:16 +0000   Fri, 26 May 2023 11:54:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 26 May 2023 11:53:16 +0000   Fri, 26 May 2023 11:54:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 31 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 26 May 2023 11:53:16 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 26 May 2023 11:53:16 +0000   Fri, 26 May 2023 11:54:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 26 May 2023 11:53:16 +0000   Fri, 26 May 2023 11:54:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 26 May 2023 11:53:16 +0000   Fri, 26 May 2023 11:54:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 42 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 26 May 2023 11:53:16 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 26 May 2023 11:53:16 +0000   Fri, 26 May 2023 11:54:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 26 May 2023 11:53:16 +0000   Fri, 26 May 2023 11:54:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 26 May 2023 11:53:16 +0000   Fri, 26 May 2023 11:54:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 198 lines ...
yes
has:the server doesn't have a resource type
Successful
message:yes
has:yes
Successful
message:error: --subresource can not be used with NonResourceURL
has:subresource can not be used with NonResourceURL
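For context, this case exercises kubectl auth can-i, which accepts either a resource or a non-resource URL; --subresource only applies to the former. A minimal sketch (resource and subresource are illustrative, not from this run):

  kubectl auth can-i get pods --subresource=status   # resource check; --subresource is valid
  kubectl auth can-i get /healthz                    # non-resource URL check; --subresource is rejected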
Successful
Successful
message:yes
0
has:0
... skipping 62 lines ...
		{Verbs:[get list watch] APIGroups:[] Resources:[configmaps] ResourceNames:[] NonResourceURLs:[]}
legacy-script.sh:893: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
legacy-script.sh:894: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
legacy-script.sh:895: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
legacy-script.sh:896: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
Successful
message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
has:only rbac.authorization.k8s.io/v1 is supported
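This rejection appears to come from kubectl auth reconcile, which creates or updates RBAC objects to match a manifest and only accepts rbac.authorization.k8s.io/v1 kinds. A minimal sketch (the filename is a placeholder):

  kubectl auth reconcile -f rbac-rules.yaml   # manifests using v1beta1 RBAC kinds fail as shown above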
rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
role.rbac.authorization.k8s.io "testing-R" deleted
Warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
... skipping 24 lines ...
discovery.sh:236: Successful get all -l app=cassandra {{range.items}}{{range .metadata.labels}}{{.}}:{{end}}{{end}}: cassandra:cassandra:cassandra:cassandra:
(Bpod "cassandra-dqcqd" deleted
I0526 11:59:23.267492   22231 event.go:307] "Event occurred" object="namespace-1685102362-30664/cassandra" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cassandra-dtv95"
pod "cassandra-fp95k" deleted
I0526 11:59:23.278825   22231 event.go:307] "Event occurred" object="namespace-1685102362-30664/cassandra" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cassandra-g7mrf"
replicationcontroller "cassandra" deleted
E0526 11:59:23.283088   22231 replica_set.go:544] sync "namespace-1685102362-30664/cassandra" failed with replicationcontrollers "cassandra" not found
service "cassandra" deleted
+++ exit code: 0
Recording: run_kubectl_explain_tests
Running command: run_kubectl_explain_tests

+++ Running case: test-cmd.run_kubectl_explain_tests 
... skipping 127 lines ...
Successful
message:customresourcedefinition.apiextensions.k8s.io/examples.test.com created
has:created
Successful
message:example.test.com/test created
has:created
W0526 11:59:26.970792   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 11:59:26.970835   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0526 11:59:27.036866   19392 handler.go:232] Adding GroupVersion test.com v1 to ResourceManager
I0526 11:59:27.041255   19392 handler.go:232] Adding GroupVersion test.com v1 to ResourceManager
Successful
message:customresourcedefinition.apiextensions.k8s.io "examples.test.com" deleted
has:deleted
NAME                              SHORTNAMES   APIVERSION                             NAMESPACED   KIND
... skipping 338 lines ...
namespace-1685102362-30664   default   0         13s
namespace-1685102364-22955   default   0         11s
some-other-random            default   0         14s
has:all-ns-test-2
namespace "all-ns-test-1" deleted
namespace "all-ns-test-2" deleted
W0526 11:59:41.439047   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 11:59:41.439085   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0526 11:59:45.481555   22231 namespace_controller.go:182] "Namespace has been deleted" namespace="all-ns-test-1"
get.sh:442: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
get.sh:446: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
get.sh:450: Successful get nodes {{range.items}}{{.metadata.name}}:{{end}}: 127.0.0.1:
... skipping 19 lines ...
message:Warning: example.com/v1beta1 DeprecatedKind is deprecated; use example.com/v1 DeprecatedKind
No resources found in namespace-1685102364-22955 namespace.
has:example.com/v1beta1 DeprecatedKind is deprecated
Successful
message:Warning: example.com/v1beta1 DeprecatedKind is deprecated; use example.com/v1 DeprecatedKind
No resources found in namespace-1685102364-22955 namespace.
error: 1 warning received
has:example.com/v1beta1 DeprecatedKind is deprecated
Successful
message:Warning: example.com/v1beta1 DeprecatedKind is deprecated; use example.com/v1 DeprecatedKind
No resources found in namespace-1685102364-22955 namespace.
error: 1 warning received
has:error: 1 warning received
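The "error: 1 warning received" behaviour corresponds to kubectl's --warnings-as-errors flag, which turns API deprecation warnings into a non-zero exit. A sketch (the plural resource name is a placeholder for the test CRD above):

  kubectl get deprecatedkinds.example.com --warnings-as-errors   # prints the deprecation warning, then exits with an error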
I0526 11:59:46.618752   19392 handler.go:232] Adding GroupVersion example.com v1 to ResourceManager
I0526 11:59:46.618813   19392 handler.go:232] Adding GroupVersion example.com v1beta1 to ResourceManager
customresourcedefinition.apiextensions.k8s.io "deprecated.example.com" deleted
I0526 11:59:46.623382   19392 handler.go:232] Adding GroupVersion example.com v1 to ResourceManager
I0526 11:59:46.623430   19392 handler.go:232] Adding GroupVersion example.com v1beta1 to ResourceManager
+++ exit code: 0
... skipping 568 lines ...
node/127.0.0.1 cordoned (server dry run)
Warning: deleting Pods that declare no controller: namespace-1685102394-16565/test-pod-1
evicting pod namespace-1685102394-16565/test-pod-1 (server dry run)
node/127.0.0.1 drained (server dry run)
node-management.sh:140: Successful get pods {{range .items}}{{.metadata.name}},{{end}}: test-pod-1,test-pod-2,
Warning: deleting Pods that declare no controller: namespace-1685102394-16565/test-pod-1
W0526 12:00:00.574113   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 12:00:00.574146   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0526 12:00:00.841817   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 12:00:00.841856   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0526 12:00:15.317052   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 12:00:15.317085   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0526 12:00:25.510187   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 12:00:25.510224   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:node/127.0.0.1 cordoned
evicting pod namespace-1685102394-16565/test-pod-1
pod "test-pod-1" has DeletionTimestamp older than 1 seconds, skipping
node/127.0.0.1 drained
has:evicting pod .*/test-pod-1
... skipping 14 lines ...
message:node/127.0.0.1 already uncordoned (server dry run)
has:already uncordoned
node-management.sh:161: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
node/127.0.0.1 labeled
node-management.sh:166: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
Successful
message:error: cannot specify both a node name and a --selector option
See 'kubectl drain -h' for help and examples
has:cannot specify both a node name
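kubectl drain targets nodes either by name or by label selector, never both at once; that is what the error above asserts. A sketch (the label is illustrative):

  kubectl drain 127.0.0.1              # drain a single node by name
  kubectl drain --selector=env=test    # drain every node matching the selector
  kubectl drain 127.0.0.1 -l env=test  # rejected, as in the message above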
node-management.sh:172: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
node-management.sh:174: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
node-management.sh:176: Successful get pods {{range .items}}{{.metadata.name}},{{end}}: test-pod-1,test-pod-2,
Successful
... skipping 78 lines ...
Warning: deleting Pods that declare no controller: namespace-1685102394-16565/test-pod-1, namespace-1685102394-16565/test-pod-2
evicting pod namespace-1685102394-16565/test-pod-1 (dry run)
evicting pod namespace-1685102394-16565/test-pod-2 (dry run)
node/127.0.0.1 drained (dry run)
has:/v1/pods?fieldSelector=spec.nodeName%3D127.0.0.1&limit=500 200 OK
Successful
message:error: USAGE: cordon NODE [flags]
See 'kubectl cordon -h' for help and examples
has:error\: USAGE\: cordon NODE
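For reference, cordon and its inverse take exactly one node name; a minimal sketch:

  kubectl cordon 127.0.0.1     # mark the node unschedulable
  kubectl uncordon 127.0.0.1   # make it schedulable again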
node/127.0.0.1 already uncordoned
Successful
message:error: You must provide one or more resources by argument or filename.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
   '<resource> <name>'
   '<resource>'
has:must provide one or more resources
... skipping 18 lines ...
+++ [0526 12:00:32] Testing kubectl plugins
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/version/kubectl-version
  - warning: kubectl-version overwrites existing command: "kubectl version"
error: one plugin warning was found
has:kubectl-version overwrites existing command: "kubectl version"
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/kubectl-foo
test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
  - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
error: one plugin warning was found
has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/kubectl-foo
has:plugins are available
Successful
message:Unable to read directory "test/fixtures/pkg/kubectl/plugins/empty" from your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory. Skipping...
error: unable to find any kubectl plugins in your PATH
has:unable to find any kubectl plugins in your PATH
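kubectl discovers plugins as executables named kubectl-<name> anywhere on PATH and dispatches kubectl <name> to them; kubectl plugin list prints warnings like those above when names collide or overwrite built-ins. A minimal sketch (the install path is illustrative):

  printf '#!/bin/sh\necho "I am plugin foo"\n' > /usr/local/bin/kubectl-foo
  chmod +x /usr/local/bin/kubectl-foo
  kubectl foo          # dispatches to the kubectl-foo executable
  kubectl plugin list  # reports discovered plugins plus overwrite/shadowing warnings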
Successful
message:I am plugin foo
has:plugin foo
Successful
message:I am plugin bar called with args test/fixtures/pkg/kubectl/plugins/bar/kubectl-bar arg1
... skipping 13 lines ...

+++ Running case: test-cmd.run_impersonation_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_impersonation_tests
+++ [0526 12:00:32] Testing impersonation
Successful
message:error: requesting uid, groups or user-extra for test-admin without impersonating a user
has:without impersonating a user
Successful
message:error: requesting uid, groups or user-extra for test-admin without impersonating a user
has:without impersonating a user
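The two failures above assert that impersonation extras such as --as-group and --as-uid are only valid together with --as. A sketch (user and group names are illustrative):

  kubectl get pods --as=user1                    # impersonate a user
  kubectl get pods --as=user1 --as-group=group1  # impersonate a user plus a group
  kubectl get pods --as-group=group1             # rejected: group without a user, as above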
certificatesigningrequest.certificates.k8s.io/foo created
authorization.sh:60: Successful get csr/foo {{.spec.username}}: user1
authorization.sh:61: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
(Bcertificatesigningrequest.certificates.k8s.io "foo" deleted
certificatesigningrequest.certificates.k8s.io/foo created
... skipping 19 lines ...
I0526 12:00:34.210984   22231 event.go:307] "Event occurred" object="namespace-1685102434-17271/test-1" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-1-7697bf65f7 to 1"
I0526 12:00:34.217751   22231 event.go:307] "Event occurred" object="namespace-1685102434-17271/test-1-7697bf65f7" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-1-7697bf65f7-6m2nv"
deployment.apps/test-2 created
I0526 12:00:34.285999   22231 event.go:307] "Event occurred" object="namespace-1685102434-17271/test-2" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-2-675f68f47d to 1"
I0526 12:00:34.289871   22231 event.go:307] "Event occurred" object="namespace-1685102434-17271/test-2-675f68f47d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-2-675f68f47d-hg846"
wait.sh:36: Successful get deployments {{range .items}}{{.metadata.name}},{{end}}: test-1,test-2,
W0526 12:00:47.057107   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 12:00:47.057142   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0526 12:00:58.647717   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 12:00:58.647758   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:error: timed out waiting for the condition on deployments/test-1
has:timed out
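This is the failure mode of kubectl wait when the condition never becomes true within the timeout. A sketch (the timeout value is illustrative):

  kubectl wait --for=condition=Available deployment/test-1 --timeout=30s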
deployment.apps "test-1" deleted
deployment.apps "test-2" deleted
Successful
message:deployment.apps/test-1 condition met
deployment.apps/test-2 condition met
... skipping 27 lines ...
+++ command: run_kubectl_debug_pod_tests
+++ [0526 12:01:08] Creating namespace namespace-1685102468-4826
namespace/namespace-1685102468-4826 created
Context "test" modified.
+++ [0526 12:01:08] Testing kubectl debug (pod tests)
pod/target created
W0526 12:01:08.393032   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 12:01:08.393067   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
debug.sh:32: Successful get pod {{range.items}}{{.metadata.name}}:{{end}}: target:
debug.sh:36: Successful get pod/target {{range.spec.ephemeralContainers}}{{.name}}:{{end}}: debug-container:
(Bpod "target" deleted
pod/target created
debug.sh:44: Successful get pod {{range.items}}{{.metadata.name}}:{{end}}: target:
debug.sh:48: Successful get pod {{range.items}}{{.metadata.name}}:{{end}}: target:target-copy:
... skipping 132 lines ...
debug.sh:369: Successful get pod -n namespace-restricted {{range.items}}{{.metadata.name}}:{{end}}: target:target-copy:
debug.sh:370: Successful get pod/target-copy -n namespace-restricted {{range.spec.containers}}{{.name}}:{{end}}: target:debug-container:
debug.sh:371: Successful get pod/target-copy -n namespace-restricted {{range.spec.containers}}{{.image}}:{{end}}: busybox:busybox:
(Bpod "target" deleted
pod "target-copy" deleted
namespace "namespace-restricted" deleted
W0526 12:01:21.892814   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 12:01:21.892848   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Recording: run_kubectl_debug_node_tests
Running command: run_kubectl_debug_node_tests

+++ Running case: test-cmd.run_kubectl_debug_node_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 126 lines ...
debug.sh:440: Successful get pod/node-debugger-127.0.0.1-4fmwr -n namespace-restricted {{if (index (index .spec.containers 0) "securityContext" "capabilities" "add") }}:{{end}}: 
debug.sh:441: Successful get pod/node-debugger-127.0.0.1-4fmwr -n namespace-restricted {{index .spec.containers 0 "securityContext" "runAsNonRoot"}}: true
debug.sh:442: Successful get pod/node-debugger-127.0.0.1-4fmwr -n namespace-restricted {{index .spec.containers 0 "securityContext" "seccompProfile" "type"}}: RuntimeDefault
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "node-debugger-127.0.0.1-4fmwr" force deleted
namespace "namespace-restricted" deleted
W0526 12:01:31.015870   22231 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 12:01:31.015905   22231 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
No resources found
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
No resources found
+++ [0526 12:01:34] TESTS PASSED
... skipping 3389 lines ...
{"Time":"2023-05-26T12:11:41.310927776Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestPodSecurity","Output":"\t- \t\t\t\tFieldsV1:   s`{\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:apf.kubernetes.io/auto`...,\n"}
{"Time":"2023-05-26T12:11:41.311264191Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestPodSecurity","Output":"\t- \t\t\t\tAPIVersion:  \"flowcontrol.apiserver.k8s.io/v1beta3\",\n"}
{"Time":"2023-05-26T12:11:41.311326375Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestPodSecurity","Output":"\t- \t\t\t\tAPIVersion: \"flowcontrol.apiserver.k8s.io/v1beta3\",\n"}
{"Time":"2023-05-26T12:11:41.311351281Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestPodSecurity","Output":"\t- \t\t\t\tFieldsV1:   s`{\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:apf.kubernetes.io/auto`...,\n"}
{"Time":"2023-05-26T12:11:41.314768766Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestPodSecurity","Output":"\t- \t\t\t\tAPIVersion:  \"flowcontrol.apiserver.k8s.io/v1beta3\",\n"}
{"Time":"2023-05-26T12:11:41.314831945Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestPodSecurity","Output":"\t- \t\t\t\tAPIVersion: \"flowcontrol.apiserver.k8s.io/v1beta3\",\n"}
{"Time":"2023-05-26T12:11:41.314847722Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestPodSecurity","Output":"\t- \t\t\t\tFieldsV1:   s`{\"{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:168","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2023-05-26T12:11:49Z"}
++ early_exit_handler
++ '[' -n 181 ']'
++ kill -TERM 181
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 4 lines ...