PR AxeZhan: [Scheduler] Make sure handlers have synced before scheduling
Result: ABORTED
Tests: 0 failed / 140 succeeded
Started: 2023-03-18 06:15
Elapsed: 12m42s
Revision: 7ea1101f0c816aa915dcbb05d57fd4f4ed749839
Refs: 116729

No Test Failures!



Error lines from build-log.txt

... skipping 49 lines ...
Recording: record_command_canary
Running command: record_command_canary

+++ Running case: test-cmd.record_command_canary 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: record_command_canary
/home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 165: bogus-expected-to-fail: command not found
!!! [0318 06:15:52] Call tree:
!!! [0318 06:15:52]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 record_command_canary(...)
!!! [0318 06:15:52]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0318 06:15:52]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:141 juLog(...)
!!! [0318 06:15:52]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:169 record_command(...)
!!! [0318 06:15:52]  5: hack/make-rules/test-cmd.sh:35 source(...)
+++ exit code: 1
+++ error: 1
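The canary above fails by design: it records a command that cannot exist, so the harness's failure-recording path (record_command -> juLog -> eVal) is exercised before the real tests run. A rough sketch of the shape of that check, inferred from the call tree above rather than the exact legacy-script.sh source:

  # Deliberately run a nonexistent command and let the harness record it.
  record_command_canary() {
    set +o errexit
    bogus-expected-to-fail   # not a real command; "command not found" is the point
    set -o errexit
  }
  record_command record_command_canary   # expected to exit nonzero, as above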
+++ [0318 06:15:52] Running kubeadm tests
go version go1.20.2 linux/amd64
+++ [0318 06:15:56] Building go targets for linux/amd64
    k8s.io/kubernetes/cmd/kubeadm (static)
go version go1.20.2 linux/amd64
+++ [0318 06:16:49] Running tests without code coverage 
... skipping 225 lines ...
I0318 06:19:19.830387   20087 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0318 06:19:19.830545   20087 crd_finalizer.go:266] Starting CRDFinalizer
I0318 06:19:19.842963   20087 crdregistration_controller.go:111] Starting crd-autoregister controller
I0318 06:19:19.843274   20087 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
W0318 06:19:19.844055   20087 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0318 06:19:19.844374   20087 gc_controller.go:78] Starting apiserver lease garbage collector
E0318 06:19:19.919610   20087 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
I0318 06:19:19.924629   20087 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
I0318 06:19:19.924712   20087 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0318 06:19:19.924832   20087 cache.go:39] Caches are synced for AvailableConditionController controller
I0318 06:19:19.924878   20087 cache.go:39] Caches are synced for autoregister controller
I0318 06:19:19.924858   20087 shared_informer.go:318] Caches are synced for configmaps
I0318 06:19:19.926405   20087 controller.go:624] quota admission added evaluator for: namespaces
... skipping 16 lines ...
go version go1.20.2 linux/amd64
+++ [0318 06:19:23] Building go targets for linux/amd64
    k8s.io/kubernetes/cmd/kube-controller-manager (static)
+++ [0318 06:20:01] Generate kubeconfig for controller-manager
+++ [0318 06:20:01] Starting controller-manager
I0318 06:20:02.366705   23139 serving.go:348] Generated self-signed cert in-memory
W0318 06:20:02.865832   23139 authentication.go:426] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0318 06:20:02.865886   23139 authentication.go:320] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0318 06:20:02.865899   23139 authentication.go:344] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0318 06:20:02.865928   23139 authorization.go:225] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0318 06:20:02.865946   23139 authorization.go:193] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0318 06:20:02.866373   23139 controllermanager.go:187] "Starting" version="v1.27.0-beta.0.24+7867a91812af48"
I0318 06:20:02.866401   23139 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0318 06:20:02.868393   23139 secure_serving.go:210] Serving securely on [::]:10257
I0318 06:20:02.868556   23139 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0318 06:20:02.868706   23139 leaderelection.go:245] attempting to acquire leader lease kube-system/kube-controller-manager...
... skipping 99 lines ...
I0318 06:20:02.916989   23139 replica_set.go:201] "Starting controller" name="replicaset"
I0318 06:20:02.917008   23139 shared_informer.go:311] Waiting for caches to sync for ReplicaSet
I0318 06:20:02.917494   23139 controllermanager.go:638] "Started controller" controller="endpointslice"
I0318 06:20:02.922460   23139 endpointslice_controller.go:252] Starting endpoint slice controller
I0318 06:20:02.922779   23139 shared_informer.go:311] Waiting for caches to sync for endpoint_slice
I0318 06:20:02.922968   23139 controllermanager.go:638] "Started controller" controller="replicationcontroller"
E0318 06:20:02.923192   23139 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
I0318 06:20:02.923220   23139 controllermanager.go:616] "Warning: skipping controller" controller="cloud-node-lifecycle"
I0318 06:20:02.923488   23139 replica_set.go:201] "Starting controller" name="replicationcontroller"
I0318 06:20:02.923519   23139 shared_informer.go:311] Waiting for caches to sync for ReplicationController
I0318 06:20:02.923618   23139 controllermanager.go:638] "Started controller" controller="persistentvolume-expander"
I0318 06:20:02.923834   23139 expand_controller.go:339] "Starting expand controller"
I0318 06:20:02.923859   23139 shared_informer.go:311] Waiting for caches to sync for expand
... skipping 15 lines ...
I0318 06:20:02.925703   23139 shared_informer.go:311] Waiting for caches to sync for stateful set
I0318 06:20:02.925749   23139 certificate_controller.go:112] Starting certificate controller "csrapproving"
I0318 06:20:02.925756   23139 shared_informer.go:311] Waiting for caches to sync for certificate-csrapproving
I0318 06:20:02.925825   23139 controllermanager.go:638] "Started controller" controller="endpointslicemirroring"
I0318 06:20:02.925975   23139 endpointslicemirroring_controller.go:211] Starting EndpointSliceMirroring controller
I0318 06:20:02.925990   23139 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
E0318 06:20:02.926229   23139 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
I0318 06:20:02.926253   23139 controllermanager.go:616] "Warning: skipping controller" controller="service"
I0318 06:20:02.926269   23139 core.go:224] "Will not configure cloud provider routes for allocate-node-cidrs" CIDRs=false routes=true
I0318 06:20:02.926280   23139 controllermanager.go:616] "Warning: skipping controller" controller="route"
I0318 06:20:02.926475   23139 controllermanager.go:638] "Started controller" controller="root-ca-cert-publisher"
I0318 06:20:02.926702   23139 controllermanager.go:638] "Started controller" controller="endpoint"
I0318 06:20:02.926899   23139 controllermanager.go:638] "Started controller" controller="ttl"
... skipping 80 lines ...
I0318 06:20:03.231204   23139 taint_manager.go:211] "Sending events to api server"
I0318 06:20:03.316924   23139 shared_informer.go:318] Caches are synced for service account
I0318 06:20:03.317119   23139 shared_informer.go:318] Caches are synced for resource quota
I0318 06:20:03.319197   20087 controller.go:624] quota admission added evaluator for: serviceaccounts
I0318 06:20:03.334674   23139 shared_informer.go:318] Caches are synced for resource quota
node/127.0.0.1 created
I0318 06:20:03.590715   23139 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"127.0.0.1\" does not exist"
+++ [0318 06:20:03] Checking kubectl version
I0318 06:20:03.654685   23139 shared_informer.go:318] Caches are synced for garbage collector
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"27+", GitVersion:"v1.27.0-beta.0.24+7867a91812af48", GitCommit:"7867a91812af487389059d56f27175365d8cf42b", GitTreeState:"clean", BuildDate:"2023-03-17T23:59:16Z", GoVersion:"go1.20.2", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v5.0.1
Server Version: version.Info{Major:"1", Minor:"27+", GitVersion:"v1.27.0-beta.0.24+7867a91812af48", GitCommit:"7867a91812af487389059d56f27175365d8cf42b", GitTreeState:"clean", BuildDate:"2023-03-17T23:59:16Z", GoVersion:"go1.20.2", Compiler:"gc", Platform:"linux/amd64"}
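The deprecation warning above points at the structured form of this same data; for example:

  # Full client/server version as structured output instead of the
  # deprecated version.Info dump:
  kubectl version --output=json   # or --output=yaml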
I0318 06:20:03.731270   23139 shared_informer.go:318] Caches are synced for garbage collector
I0318 06:20:03.731312   23139 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
The Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.0.0.1"}: failed to allocate IP 10.0.0.1: provided IP is already allocated
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   42s
Recording: run_kubectl_version_tests
Running command: run_kubectl_version_tests

+++ Running case: test-cmd.run_kubectl_version_tests 
... skipping 196 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [0318 06:20:09] Creating namespace namespace-1679120409-3215
namespace/namespace-1679120409-3215 created
Context "test" modified.
+++ [0318 06:20:09] Testing RESTMapper
+++ [0318 06:20:09] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
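The "returns error as expected" line is a negative test: a resource type the RESTMapper cannot resolve must produce an error, and the case only passes if it does. A minimal sketch of that assertion (not the exact test helper):

  # Expect failure: the server has no mapping for this type.
  if kubectl get unknownresourcetype 2>/dev/null; then
    echo "FAIL: expected 'kubectl get unknownresourcetype' to error" >&2
    exit 1
  fi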
NAME                              SHORTNAMES   APIVERSION                             NAMESPACED   KIND
bindings                                       v1                                     true         Binding
componentstatuses                 cs           v1                                     false        ComponentStatus
configmaps                        cm           v1                                     true         ConfigMap
endpoints                         ep           v1                                     true         Endpoints
... skipping 60 lines ...
namespace/namespace-1679120411-28842 created
Context "test" modified.
+++ [0318 06:20:11] Testing clusterroles
rbac.sh:29: Successful get clusterroles/cluster-admin {{.metadata.name}}: cluster-admin
rbac.sh:30: Successful get clusterrolebindings/cluster-admin {{.metadata.name}}: cluster-admin
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created (dry run)
clusterrole.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created
rbac.sh:42: Successful get clusterrole/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
Successful
message:Warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "pod-admin" deleted
... skipping 18 lines ...
clusterrole.rbac.authorization.k8s.io/url-reader created
rbac.sh:61: Successful get clusterrole/url-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:
rbac.sh:62: Successful get clusterrole/url-reader {{range.rules}}{{range.nonResourceURLs}}{{.}}:{{end}}{{end}}: /logs/*:/healthz/*:
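Each "Successful get ..." line above compares kubectl's go-template output against an expected string. The underlying call is roughly the following (an assumed form of the assertion helper, not its exact source):

  kubectl get clusterrole/url-reader \
    -o go-template='{{range.rules}}{{range.nonResourceURLs}}{{.}}:{{end}}{{end}}'
  # expected output: /logs/*:/healthz/*: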
clusterrole.rbac.authorization.k8s.io/aggregation-reader created
rbac.sh:64: Successful get clusterrole/aggregation-reader {{.metadata.name}}: aggregation-reader
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created
rbac.sh:77: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (server dry run)
rbac.sh:80: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
... skipping 64 lines ...
rbac.sh:102: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:foo:test-all-user:
rbac.sh:103: Successful get clusterrolebinding/super-group {{range.subjects}}{{.name}}:{{end}}: the-group:foo:test-all-user:
rbac.sh:104: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.name}}:{{end}}: sa-name:foo:test-all-user:
rolebinding.rbac.authorization.k8s.io/admin created (dry run)
rolebinding.rbac.authorization.k8s.io/admin created (server dry run)
Successful
message:Error from server (NotFound): rolebindings.rbac.authorization.k8s.io "admin" not found
has: not found
rolebinding.rbac.authorization.k8s.io/admin created
rbac.sh:113: Successful get rolebinding/admin {{.roleRef.kind}}: ClusterRole
rbac.sh:114: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:
rolebinding.rbac.authorization.k8s.io/admin subjects updated
rbac.sh:116: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:foo:
... skipping 152 lines ...
namespace/namespace-1679120420-26535 created
Context "test" modified.
+++ [0318 06:20:20] Testing role
role.rbac.authorization.k8s.io/pod-admin created (dry run)
role.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): roles.rbac.authorization.k8s.io "pod-admin" not found
has: not found
role.rbac.authorization.k8s.io/pod-admin created
rbac.sh:159: Successful get role/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
rbac.sh:160: Successful get role/pod-admin {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
rbac.sh:161: Successful get role/pod-admin {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
Successful
... skipping 623 lines ...
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
has:valid-pod
core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: resource(s) were provided, but no name was specified
core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: setting 'all' parameter but found a non empty selector.
core.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:210: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:214: Successful get pods -lname=valid-pod {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:219: Successful get namespaces {{range.items}}{{ if eq .metadata.name "test-kubectl-describe-pod" }}found{{end}}{{end}}:: :
... skipping 30 lines ...
I0318 06:20:39.150719   28287 round_trippers.go:553] GET https://127.0.0.1:6443/apis/policy/v1/namespaces/test-kubectl-describe-pod/poddisruptionbudgets/test-pdb-2 200 OK in 1 milliseconds
I0318 06:20:39.152719   28287 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-kubectl-describe-pod/events?fieldSelector=involvedObject.name%3Dtest-pdb-2%2CinvolvedObject.namespace%3Dtest-kubectl-describe-pod%2CinvolvedObject.kind%3DPodDisruptionBudget%2CinvolvedObject.uid%3D0610b42d-5d5a-4aee-9ac7-f9b357dea1d7&limit=500 200 OK in 1 milliseconds
poddisruptionbudget.policy/test-pdb-3 created
core.sh:271: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
poddisruptionbudget.policy/test-pdb-4 created
core.sh:275: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
error: min-available and max-unavailable cannot be both specified
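That error is the expected-failure case: a PodDisruptionBudget takes either --min-available or --max-unavailable, never both. For illustration (hypothetical selector, mirroring the budgets created above):

  kubectl create pdb test-pdb-4 --selector=app=web --max-unavailable=50%                # valid
  kubectl create pdb bad-pdb --selector=app=web --min-available=1 --max-unavailable=2   # rejected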
core.sh:281: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
pod/env-test-pod created
matched TEST_CMD_1
matched <set to the key 'key-1' in secret 'test-secret'>
matched TEST_CMD_2
matched <set to the key 'key-2' of config map 'test-configmap'>
... skipping 242 lines ...
core.sh:542: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: registry.k8s.io/pause:3.9:
Successful
message:kubectl-create kubectl-patch
has:kubectl-patch
pod/valid-pod patched
core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
+++ [0318 06:20:56] "kubectl patch with resourceVersion 623" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
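The Conflict above is the optimistic-concurrency check: a patch that pins a stale resourceVersion is rejected until rebased on the current object. A sketch of the failing call (the exact patch body is assumed):

  kubectl patch pod valid-pod -p '{"metadata":{"resourceVersion":"623","labels":{"patched":"true"}}}'
  # Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": ...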
pod "valid-pod" deleted
pod/valid-pod replaced
core.sh:586: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
Successful
message:kubectl-replace
has:kubectl-replace
Successful
message:error: --grace-period must have --force specified
has:\-\-grace-period must have \-\-force specified
Successful
message:error: --timeout must have --force specified
has:\-\-timeout must have \-\-force specified
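Both failures check flag coupling on kubectl delete: --grace-period=0 and --timeout are rejected unless --force is also given. The accepted forms would be, for example:

  kubectl delete pod valid-pod --grace-period=0 --force
  kubectl delete pod valid-pod --timeout=1m --force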
I0318 06:20:57.430225   23139 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"node-v1-test\" does not exist"
node/node-v1-test created
core.sh:614: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
node/node-v1-test replaced (server dry run)
node/node-v1-test replaced (dry run)
core.sh:639: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
I0318 06:20:58.240522   23139 event.go:307] "Event occurred" object="node-v1-test" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node node-v1-test event: Registered Node node-v1-test in Controller"
... skipping 31 lines ...
spec:
  containers:
  - image: registry.k8s.io/pause:3.9
    name: kubernetes-pause
has:localonlyvalue
core.sh:691: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
error: 'name' already has a value (valid-pod), and --overwrite is false
core.sh:695: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
core.sh:699: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
(Bpod/valid-pod labeled
core.sh:703: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
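The earlier "'name' already has a value" error and this successful relabel differ only in --overwrite; presumably something like:

  kubectl label pod valid-pod name=valid-pod-super-sayan              # rejected: label already set
  kubectl label pod valid-pod name=valid-pod-super-sayan --overwrite  # pod/valid-pod labeled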
core.sh:707: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 85 lines ...
+++ Running case: test-cmd.run_kubectl_create_error_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_error_tests
+++ [0318 06:21:08] Creating namespace namespace-1679120468-22471
namespace/namespace-1679120468-22471 created
Context "test" modified.
+++ [0318 06:21:08] Testing kubectl create with error
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
... skipping 63 lines ...
	If true, keep the managedFields when printing objects in JSON or YAML format.

    --template='':
	Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].

    --validate='strict':
	Must be one of: strict (or true), warn, ignore (or false). 		"true" or "strict" will use a schema to validate the input and fail the request if invalid. It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not. 		"warn" will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as "ignore" otherwise. 		"false" or "ignore" will not perform any schema validation, silently dropping any unknown or duplicate fields.

    --windows-line-endings=false:
	Only relevant if --edit=true. Defaults to the line ending native to your platform.

Usage:
  kubectl create -f FILENAME [options]
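The "must specify one of -f and -k" error is itself the expected output here: kubectl create needs exactly one input source. Valid invocations would be, e.g. (hypothetical paths):

  kubectl create -f ./pod.yaml          # from a manifest file (or stdin with -f -)
  kubectl create -k ./kustomize-dir/    # from a kustomization directory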
... skipping 38 lines ...
I0318 06:21:11.642963   23139 event.go:307] "Event occurred" object="namespace-1679120468-15169/test-deployment-retainkeys-d65c44c97" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-deployment-retainkeys-d65c44c97-2296k"
deployment.apps "test-deployment-retainkeys" deleted
apply.sh:88: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/selector-test-pod created
apply.sh:92: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
apply.sh:101: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/test-pod created (dry run)
pod/test-pod created (server dry run)
apply.sh:107: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 31 lines ...
pod/b created
apply.sh:207: Successful get pods a {{.metadata.name}}: a
apply.sh:208: Successful get pods b -n nsb {{.metadata.name}}: b
(Bpod "a" deleted
pod "b" deleted
Successful
message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
has:all resources selected for prune without explicitly passing --all
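The prune failure is a guard rail: pruning with no label selector deletes broadly, so it must be requested explicitly. Illustrative forms (hypothetical manifests):

  kubectl apply --prune --all -f manifests/          # explicit opt-in to prune everything applied
  kubectl apply --prune -l app=example -f manifests/ # or scope pruning with a selector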
pod/a created
pod/b created
I0318 06:21:21.847153   20087 alloc.go:330] "allocated clusterIPs" service="namespace-1679120468-15169/prune-svc" clusterIPs=map[IPv4:10.0.0.116]
service/prune-svc created
W0318 06:21:21.848033   32372 prune.go:71] Deprecated: kubectl apply will no longer prune non-namespaced resources by default when used with the --namespace flag in a future release. To preserve the current behaviour, list the resources you want to target explicitly in the --prune-allowlist flag.
... skipping 51 lines ...