PR fatsheep9146: contextual logging cleanup
Result ABORTED
Tests 1 failed / 5026 succeeded
Started 2023-05-26 02:44
Elapsed 49m26s
Revision 7e77cb3c1c205980bc64280702bd3a5744f3b11b
Refs 116930

Test Failures


k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration TestSubresourcePatch 0.00s

go test -v k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration -run TestSubresourcePatch$
=== RUN   TestSubresourcePatch
    testserver.go:250: Resolved testserver package path to: "/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/cmd/server/testing"
    testserver.go:139: runtime-config=map[api/all:true]
    testserver.go:140: Starting apiextensions-apiserver on port 40733...
I0526 03:30:29.418392   92324 serving.go:342] Generated self-signed cert (/tmp/apiextensions-apiserver3091671085/apiserver.crt, /tmp/apiextensions-apiserver3091671085/apiserver.key)
I0526 03:30:29.989222   92324 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ValidatingAdmissionPolicy.
W0526 03:30:29.989507   92324 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0526 03:30:29.989530   92324 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0526 03:30:29.991576   92324 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0526 03:30:29.996379   92324 handler.go:232] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
W0526 03:30:29.996400   92324 genericapiserver.go:744] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
W0526 03:30:29.996523   92324 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
    testserver.go:161: Waiting for /healthz to be ok...
I0526 03:30:30.004599   92324 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/tmp/apiextensions-apiserver3091671085/apiserver.crt::/tmp/apiextensions-apiserver3091671085/apiserver.key"
I0526 03:30:30.005309   92324 secure_serving.go:210] Serving securely on 127.0.0.1:40733
I0526 03:30:30.005336   92324 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0526 03:30:30.005392   92324 customresource_discovery_controller.go:289] Starting DiscoveryController
I0526 03:30:30.005492   92324 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0526 03:30:30.005525   92324 apf_controller.go:361] Starting API Priority and Fairness config controller
I0526 03:30:30.005542   92324 naming_controller.go:291] Starting NamingConditionController
I0526 03:30:30.005556   92324 establishing_controller.go:76] Starting EstablishingController
I0526 03:30:30.005571   92324 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0526 03:30:30.005763   92324 crd_finalizer.go:266] Starting CRDFinalizer
W0526 03:30:30.005775   92324 reflector.go:538] k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "http://127.1.2.3:12345/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.1.2.3:12345: connect: connection refused
E0526 03:30:30.005823   92324 reflector.go:149] k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "http://127.1.2.3:12345/api/v1/services?limit=500&resourceVersion=0": dial tcp 127.1.2.3:12345: connect: connection refused
W0526 03:30:30.005934   92324 reflector.go:538] k8s.io/client-go/informers/factory.go:150: failed to list *v1beta3.FlowSchema: Get "http://127.1.2.3:12345/apis/flowcontrol.apiserver.k8s.io/v1beta3/flowschemas?limit=500&resourceVersion=0": dial tcp 127.1.2.3:12345: connect: connection refused
E0526 03:30:30.006015   92324 reflector.go:149] k8s.io/client-go/informers/factory.go:150: Failed to watch *v1beta3.FlowSchema: failed to list *v1beta3.FlowSchema: Get "http://127.1.2.3:12345/apis/flowcontrol.apiserver.k8s.io/v1beta3/flowschemas?limit=500&resourceVersion=0": dial tcp 127.1.2.3:12345: connect: connection refused
W0526 03:30:30.006186   92324 reflector.go:538] k8s.io/client-go/informers/factory.go:150: failed to list *v1beta3.PriorityLevelConfiguration: Get "http://127.1.2.3:12345/apis/flowcontrol.apiserver.k8s.io/v1beta3/prioritylevelconfigurations?limit=500&resourceVersion=0": dial tcp 127.1.2.3:12345: connect: connection refused
E0526 03:30:30.006230   92324 reflector.go:149] k8s.io/client-go/informers/factory.go:150: Failed to watch *v1beta3.PriorityLevelConfiguration: failed to list *v1beta3.PriorityLevelConfiguration: Get "http://127.1.2.3:12345/apis/flowcontrol.apiserver.k8s.io/v1beta3/prioritylevelconfigurations?limit=500&resourceVersion=0": dial tcp 127.1.2.3:12345: connect: connection refused
I0526 03:30:30.512547   92324 handler.go:232] Adding GroupVersion mygroup.example.com v1beta1 to ResourceManager
I0526 03:30:30.512594   92324 handler.go:232] Adding GroupVersion mygroup.example.com v1 to ResourceManager

				from junit_20230526-025648.xml



5026 Passed Tests

58 Skipped Tests

Error lines from build-log.txt

... skipping 49 lines ...
Recording: record_command_canary
Running command: record_command_canary

+++ Running case: test-cmd.record_command_canary 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: record_command_canary
/home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 167: bogus-expected-to-fail: command not found
!!! [0526 02:44:42] Call tree:
!!! [0526 02:44:42]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 record_command_canary(...)
!!! [0526 02:44:42]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0526 02:44:42]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:143 juLog(...)
!!! [0526 02:44:42]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:171 record_command(...)
!!! [0526 02:44:42]  5: hack/make-rules/test-cmd.sh:35 source(...)
+++ exit code: 1
+++ error: 1
+++ [0526 02:44:42] Running kubeadm tests
go: downloading go.uber.org/automaxprocs v1.5.2
+++ [0526 02:44:47] Setting GOMAXPROCS: 6
+++ [0526 02:44:47] Building go targets for linux/amd64
    k8s.io/kubernetes/cmd/kubeadm (static)
+++ [0526 02:45:40] Setting GOMAXPROCS: 6
... skipping 225 lines ...
I0526 02:48:02.588304   19396 aggregator.go:150] waiting for initial CRD sync...
I0526 02:48:02.588884   19396 crdregistration_controller.go:111] Starting crd-autoregister controller
I0526 02:48:02.588959   19396 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
I0526 02:48:02.596989   19396 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0526 02:48:02.597031   19396 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0526 02:48:02.597097   19396 apf_controller.go:361] Starting API Priority and Fairness config controller
E0526 02:48:02.673771   19396 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
I0526 02:48:02.684981   19396 shared_informer.go:318] Caches are synced for configmaps
I0526 02:48:02.686457   19396 controller.go:624] quota admission added evaluator for: namespaces
I0526 02:48:02.687240   19396 cache.go:39] Caches are synced for AvailableConditionController controller
I0526 02:48:02.687277   19396 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
I0526 02:48:02.689038   19396 shared_informer.go:318] Caches are synced for crd-autoregister
I0526 02:48:02.689084   19396 aggregator.go:152] initial CRD sync complete...
... skipping 19 lines ...
+++ [0526 02:48:04] Setting GOMAXPROCS: 6
+++ [0526 02:48:05] Building go targets for linux/amd64
    k8s.io/kubernetes/cmd/kube-controller-manager (static)
+++ [0526 02:48:41] Generate kubeconfig for controller-manager
+++ [0526 02:48:41] Starting controller-manager
I0526 02:48:42.532336   22225 serving.go:348] Generated self-signed cert in-memory
W0526 02:48:43.093386   22225 authentication.go:446] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0526 02:48:43.093420   22225 authentication.go:339] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0526 02:48:43.093431   22225 authentication.go:363] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0526 02:48:43.093444   22225 authorization.go:225] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0526 02:48:43.093455   22225 authorization.go:193] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0526 02:48:43.093860   22225 controllermanager.go:187] "Starting" version="v1.28.0-alpha.0.1224+cc16c32bf7db07"
I0526 02:48:43.093883   22225 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0526 02:48:43.095869   22225 secure_serving.go:210] Serving securely on [::]:10257
I0526 02:48:43.096010   22225 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0526 02:48:43.096200   22225 leaderelection.go:250] attempting to acquire leader lease kube-system/kube-controller-manager...
... skipping 88 lines ...
I0526 02:48:43.123118   22225 controllermanager.go:638] "Started controller" controller="disruption"
I0526 02:48:43.123131   22225 controllermanager.go:603] "Warning: controller is disabled" controller="tokencleaner"
I0526 02:48:43.123341   22225 disruption.go:423] Sending events to api server.
I0526 02:48:43.123400   22225 disruption.go:434] Starting disruption controller
I0526 02:48:43.123410   22225 shared_informer.go:311] Waiting for caches to sync for disruption
W0526 02:48:43.123555   22225 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
E0526 02:48:43.123726   22225 core.go:92] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail"
I0526 02:48:43.123758   22225 controllermanager.go:616] "Warning: skipping controller" controller="service"
W0526 02:48:43.124004   22225 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0526 02:48:43.124087   22225 controllermanager.go:638] "Started controller" controller="root-ca-cert-publisher"
I0526 02:48:43.124166   22225 publisher.go:101] Starting root CA certificate configmap publisher
I0526 02:48:43.124219   22225 shared_informer.go:311] Waiting for caches to sync for crt configmap
I0526 02:48:43.127235   22225 controllermanager.go:638] "Started controller" controller="namespace"
... skipping 56 lines ...
I0526 02:48:43.137455   22225 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::hack/testdata/ca/ca.crt::hack/testdata/ca/ca.key"
I0526 02:48:43.137712   22225 node_lifecycle_controller.go:431] "Controller will reconcile labels"
I0526 02:48:43.137776   22225 controllermanager.go:638] "Started controller" controller="nodelifecycle"
I0526 02:48:43.137878   22225 node_lifecycle_controller.go:465] "Sending events to api server"
I0526 02:48:43.137906   22225 node_lifecycle_controller.go:476] "Starting node controller"
I0526 02:48:43.137912   22225 shared_informer.go:311] Waiting for caches to sync for taint
E0526 02:48:43.138006   22225 core.go:213] "Failed to start cloud node lifecycle controller" err="no cloud provider provided"
I0526 02:48:43.138025   22225 controllermanager.go:616] "Warning: skipping controller" controller="cloud-node-lifecycle"
I0526 02:48:43.138424   22225 controllermanager.go:638] "Started controller" controller="persistentvolume-expander"
I0526 02:48:43.138544   22225 expand_controller.go:343] "Starting expand controller"
I0526 02:48:43.138562   22225 shared_informer.go:311] Waiting for caches to sync for expand
I0526 02:48:43.138734   22225 controllermanager.go:638] "Started controller" controller="ephemeral-volume"
I0526 02:48:43.138856   22225 controller.go:169] "Starting ephemeral volume controller"
... skipping 50 lines ...
I0526 02:48:43.338428   22225 taint_manager.go:211] "Sending events to api server"
I0526 02:48:43.417354   22225 shared_informer.go:318] Caches are synced for stateful set
I0526 02:48:43.423747   22225 shared_informer.go:318] Caches are synced for disruption
I0526 02:48:43.532269   22225 shared_informer.go:318] Caches are synced for resource quota
I0526 02:48:43.544674   22225 shared_informer.go:318] Caches are synced for resource quota
node/127.0.0.1 created
I0526 02:48:43.815602   22225 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"127.0.0.1\" does not exist"
+++ [0526 02:48:43] Checking kubectl version
I0526 02:48:43.863205   22225 shared_informer.go:318] Caches are synced for garbage collector
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"28+", GitVersion:"v1.28.0-alpha.0.1224+cc16c32bf7db07", GitCommit:"cc16c32bf7db07e0ad31e33af44ffebceefca783", GitTreeState:"clean", BuildDate:"2023-05-25T23:39:03Z", GoVersion:"go1.20.4", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v5.0.1
Server Version: version.Info{Major:"1", Minor:"28+", GitVersion:"v1.28.0-alpha.0.1224+cc16c32bf7db07", GitCommit:"cc16c32bf7db07e0ad31e33af44ffebceefca783", GitTreeState:"clean", BuildDate:"2023-05-25T23:39:03Z", GoVersion:"go1.20.4", Compiler:"gc", Platform:"linux/amd64"}
I0526 02:48:43.922423   22225 shared_informer.go:318] Caches are synced for garbage collector
I0526 02:48:43.922471   22225 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
The Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.0.0.1"}: failed to allocate IP 10.0.0.1: provided IP is already allocated
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   40s
Recording: run_kubectl_version_tests
Running command: run_kubectl_version_tests

+++ Running case: test-cmd.run_kubectl_version_tests 
... skipping 196 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [0526 02:48:49] Creating namespace namespace-1685069329-2753
namespace/namespace-1685069329-2753 created
Context "test" modified.
+++ [0526 02:48:49] Testing RESTMapper
+++ [0526 02:48:49] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
NAME                              SHORTNAMES   APIVERSION                             NAMESPACED   KIND
bindings                                       v1                                     true         Binding
componentstatuses                 cs           v1                                     false        ComponentStatus
configmaps                        cm           v1                                     true         ConfigMap
endpoints                         ep           v1                                     true         Endpoints
... skipping 61 lines ...
namespace/namespace-1685069331-18224 created
Context "test" modified.
+++ [0526 02:48:51] Testing clusterroles
rbac.sh:29: Successful get clusterroles/cluster-admin {{.metadata.name}}: cluster-admin
rbac.sh:30: Successful get clusterrolebindings/cluster-admin {{.metadata.name}}: cluster-admin
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created (dry run)
clusterrole.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created
rbac.sh:42: Successful get clusterrole/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
Successful
message:Warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "pod-admin" deleted
... skipping 18 lines ...
clusterrole.rbac.authorization.k8s.io/url-reader created
rbac.sh:61: Successful get clusterrole/url-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:
rbac.sh:62: Successful get clusterrole/url-reader {{range.rules}}{{range.nonResourceURLs}}{{.}}:{{end}}{{end}}: /logs/*:/healthz/*:
clusterrole.rbac.authorization.k8s.io/aggregation-reader created
rbac.sh:64: Successful get clusterrole/aggregation-reader {{.metadata.name}}: aggregation-reader
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created
rbac.sh:77: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (server dry run)
rbac.sh:80: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
... skipping 64 lines ...
rbac.sh:102: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:foo:test-all-user:
rbac.sh:103: Successful get clusterrolebinding/super-group {{range.subjects}}{{.name}}:{{end}}: the-group:foo:test-all-user:
rbac.sh:104: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.name}}:{{end}}: sa-name:foo:test-all-user:
rolebinding.rbac.authorization.k8s.io/admin created (dry run)
rolebinding.rbac.authorization.k8s.io/admin created (server dry run)
Successful
message:Error from server (NotFound): rolebindings.rbac.authorization.k8s.io "admin" not found
has: not found
rolebinding.rbac.authorization.k8s.io/admin created
rbac.sh:113: Successful get rolebinding/admin {{.roleRef.kind}}: ClusterRole
rbac.sh:114: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:
rolebinding.rbac.authorization.k8s.io/admin subjects updated
rbac.sh:116: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:foo:
... skipping 152 lines ...
namespace/namespace-1685069337-9181 created
Context "test" modified.
+++ [0526 02:48:57] Testing role
role.rbac.authorization.k8s.io/pod-admin created (dry run)
role.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): roles.rbac.authorization.k8s.io "pod-admin" not found
has: not found
role.rbac.authorization.k8s.io/pod-admin created
rbac.sh:159: Successful get role/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
rbac.sh:160: Successful get role/pod-admin {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
rbac.sh:161: Successful get role/pod-admin {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
Successful
... skipping 625 lines ...
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: resource(s) were provided, but no name was specified
core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: setting 'all' parameter but found a non empty selector. 
core.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:210: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:214: Successful get pods -lname=valid-pod {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:219: Successful get namespaces {{range.items}}{{ if eq .metadata.name "test-kubectl-describe-pod" }}found{{end}}{{end}}:: :
... skipping 30 lines ...
I0526 02:49:15.086612   27369 round_trippers.go:553] GET https://127.0.0.1:6443/apis/policy/v1/namespaces/test-kubectl-describe-pod/poddisruptionbudgets/test-pdb-2 200 OK in 1 milliseconds
I0526 02:49:15.088076   27369 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-kubectl-describe-pod/events?fieldSelector=involvedObject.uid%3D25b880e9-8467-4c03-850b-765103427db8%2CinvolvedObject.name%3Dtest-pdb-2%2CinvolvedObject.namespace%3Dtest-kubectl-describe-pod%2CinvolvedObject.kind%3DPodDisruptionBudget&limit=500 200 OK in 1 milliseconds
poddisruptionbudget.policy/test-pdb-3 created
core.sh:271: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
poddisruptionbudget.policy/test-pdb-4 created
core.sh:275: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
error: min-available and max-unavailable cannot be both specified
core.sh:281: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
pod/env-test-pod created
matched TEST_CMD_1
matched <set to the key 'key-1' in secret 'test-secret'>
matched TEST_CMD_2
matched <set to the key 'key-2' of config map 'test-configmap'>
... skipping 242 lines ...
core.sh:542: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: registry.k8s.io/pause:3.9:
Successful
message:kubectl-create kubectl-patch
has:kubectl-patch
pod/valid-pod patched
core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
+++ [0526 02:49:30] "kubectl patch with resourceVersion 617" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
pod "valid-pod" deleted
pod/valid-pod replaced
core.sh:586: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
Successful
message:kubectl-replace
has:kubectl-replace
Successful
message:error: --grace-period must have --force specified
has:\-\-grace-period must have \-\-force specified
Successful
message:error: --timeout must have --force specified
has:\-\-timeout must have \-\-force specified
node/node-v1-test created
I0526 02:49:31.444242   22225 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"node-v1-test\" does not exist"
core.sh:614: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
node/node-v1-test replaced (server dry run)
node/node-v1-test replaced (dry run)
core.sh:639: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
node/node-v1-test replaced
core.sh:655: Successful get node node-v1-test {{.metadata.annotations.a}}: b
... skipping 29 lines ...
spec:
  containers:
  - image: registry.k8s.io/pause:3.9
    name: kubernetes-pause
has:localonlyvalue
core.sh:691: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
error: 'name' already has a value (valid-pod), and --overwrite is false
core.sh:695: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
core.sh:699: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
(Bpod/valid-pod labeled
core.sh:703: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
core.sh:707: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 84 lines ...
+++ Running case: test-cmd.run_kubectl_create_error_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_error_tests
+++ [0526 02:49:41] Creating namespace namespace-1685069381-28069
namespace/namespace-1685069381-28069 created
Context "test" modified.
+++ [0526 02:49:41] Testing kubectl create with error
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
... skipping 63 lines ...
	If true, keep the managedFields when printing objects in JSON or YAML format.

    --template='':
	Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].

    --validate='strict':
	Must be one of: strict (or true), warn, ignore (or false). 		"true" or "strict" will use a schema to validate the input and fail the request if invalid. It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not. 		"warn" will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as "ignore" otherwise. 		"false" or "ignore" will not perform any schema validation, silently dropping any unknown or duplicate fields.

    --windows-line-endings=false:
	Only relevant if --edit=true. Defaults to the line ending native to your platform.

Usage:
  kubectl create -f FILENAME [options]
... skipping 38 lines ...
I0526 02:49:44.162336   22225 event.go:307] "Event occurred" object="namespace-1685069381-13948/test-deployment-retainkeys-d65c44c97" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-deployment-retainkeys-d65c44c97-tq6gp"
deployment.apps "test-deployment-retainkeys" deleted
apply.sh:88: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/selector-test-pod created
apply.sh:92: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
apply.sh:101: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/test-pod created (dry run)
pod/test-pod created (server dry run)
apply.sh:107: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 31 lines ...
pod/b created
apply.sh:207: Successful get pods a {{.metadata.name}}: a
apply.sh:208: Successful get pods b -n nsb {{.metadata.name}}: b
pod "a" deleted
pod "b" deleted
Successful
message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
has:all resources selected for prune without explicitly passing --all
pod/a created
pod/b created
I0526 02:49:53.615780   19396 alloc.go:330] "allocated clusterIPs" service="namespace-1685069381-13948/prune-svc" clusterIPs={"IPv4":"10.0.0.69"}
service/prune-svc created
W0526 02:49:53.616329   31456 prune.go:71] Deprecated: kubectl apply will no longer prune non-namespaced resources by default when used with the --namespace flag in a future release. To preserve the current behaviour, list the resources you want to target explicitly in the --prune-allowlist flag.
... skipping 44 lines ...
pod/b unchanged
W0526 02:50:11.492934   31832 prune.go:71] Deprecated: kubectl apply will no longer prune non-namespaced resources by default when used with the --namespace flag in a future release. To preserve the current behaviour, list the resources you want to target explicitly in the --prune-allowlist flag.
pod/a pruned
apply.sh:265: Successful get pods -n nsb {{range.items}}{{.metadata.name}}:{{end}}: b:
namespace "nsb" deleted
Successful
message:error: the namespace from the provided object "nsb" does not match the namespace "foo". You must pass '--namespace=nsb' to perform this operation.
has:the namespace from the provided object "nsb" does not match the namespace "foo".
apply.sh:276: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: 
service/a created
apply.sh:280: Successful get services a {{.metadata.name}}: a
Successful
message:The Service "a" is invalid: spec.clusterIPs[0]: Invalid value: []string{"10.0.0.12"}: may not change once set
... skipping 28 lines ...
apply.sh:302: Successful get deployment test-the-deployment {{.metadata.name}}: test-the-deployment
apply.sh:303: Successful get service test-the-service {{.metadata.name}}: test-the-service
(Bconfigmap "test-the-map" deleted
service "test-the-service" deleted
deployment.apps "test-the-deployment" deleted
Successful
message:Error from server (NotFound): namespaces "multi-resource-ns" not found
has:namespaces "multi-resource-ns" not found
apply.sh:311: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:namespace/multi-resource-ns created
Error from server (NotFound): error when creating "hack/testdata/multi-resource-1.yaml": namespaces "multi-resource-ns" not found
has:namespaces "multi-resource-ns" not found
Successful
message:Error from server (NotFound): pods "test-pod" not found
has:pods "test-pod" not found
pod/test-pod created
namespace/multi-resource-ns unchanged
apply.sh:319: Successful get pods test-pod -n multi-resource-ns {{.metadata.name}}: test-pod
pod "test-pod" deleted
namespace "multi-resource-ns" deleted
I0526 02:50:23.127247   22225 namespace_controller.go:182] "Namespace has been deleted" namespace="nsb"
apply.sh:325: Successful get configmaps --field-selector=metadata.name=foo {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:configmap/foo created
error: resource mapping not found for name: "foo" namespace: "" from "hack/testdata/multi-resource-2.yaml": no matches for kind "Bogus" in version "example.com/v1"
ensure CRDs are installed first
has:no matches for kind "Bogus" in version "example.com/v1"
apply.sh:331: Successful get configmaps foo {{.metadata.name}}: foo
configmap "foo" deleted
apply.sh:337: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
... skipping 7 lines ...
pod "pod-c" deleted
apply.sh:345: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
apply.sh:349: Successful get crds {{range.items}}{{.metadata.name}}:{{end}}: 
I0526 02:50:27.934035   19396 handler.go:232] Adding GroupVersion example.com v1 to ResourceManager
Successful
message:customresourcedefinition.apiextensions.k8s.io/widgets.example.com created
error: resource mapping not found for name: "foo" namespace: "" from "hack/testdata/multi-resource-4.yaml": no matches for kind "Widget" in version "example.com/v1"
ensure CRDs are installed first
has:no matches for kind "Widget" in version "example.com/v1"
customresourcedefinition.apiextensions.k8s.io/widgets.example.com condition met
Successful
message:Error from server (NotFound): widgets.example.com "foo" not found
has:widgets.example.com "foo" not found
apply.sh:356: Successful get crds widgets.example.com {{.metadata.name}}: widgets.example.com
I0526 02:50:30.477236   19396 controller.go:624] quota admission added evaluator for: widgets.example.com
widget.example.com/foo created
customresourcedefinition.apiextensions.k8s.io/widgets.example.com unchanged
apply.sh:359: Successful get widget foo {{.metadata.name}}: foo
... skipping 34 lines ...
message:893
has:893
pod "test-pod" deleted
apply.sh:415: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
+++ [0526 02:50:32] Testing upgrade kubectl client-side apply to server-side apply
pod/test-pod created
error: Apply failed with 1 conflict: conflict with "kubectl-client-side-apply" using v1: .metadata.labels.name
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
  command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
  manifest to remove references to the fields that should keep their
... skipping 153 lines ...
pod "nginx-extensions" deleted
Successful
message:pod/test1 created
has:pod/test1 created
pod "test1" deleted
Successful
message:error: Invalid image name "InvalidImageName": invalid reference format
has:error: Invalid image name "InvalidImageName": invalid reference format
+++ exit code: 0
Recording: run_kubectl_create_filter_tests
Running command: run_kubectl_create_filter_tests

+++ Running case: test-cmd.run_kubectl_create_filter_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 3 lines ...
Context "test" modified.
+++ [0526 02:50:39] Testing kubectl create filter
create.sh:50: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/selector-test-pod created
create.sh:54: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_apply_deployments_tests
Running command: run_kubectl_apply_deployments_tests

... skipping 29 lines ...
I0526 02:50:41.576082   22225 event.go:307] "Event occurred" object="namespace-1685069439-15185/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-5645b79496 to 3"
I0526 02:50:41.579826   22225 event.go:307] "Event occurred" object="namespace-1685069439-15185/nginx-5645b79496" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-5645b79496-bn5xh"
I0526 02:50:41.583467   22225 event.go:307] "Event occurred" object="namespace-1685069439-15185/nginx-5645b79496" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-5645b79496-hfw8t"
I0526 02:50:41.583536   22225 event.go:307] "Event occurred" object="namespace-1685069439-15185/nginx-5645b79496" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-5645b79496-hq2xp"
apps.sh:183: Successful get deployment nginx {{.metadata.name}}: nginx
Successful
message:Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1685069439-15185\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"registry.k8s.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx", Namespace: "namespace-1685069439-15185"
for: "hack/testdata/deployment-label-change2.yaml": error when patching "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
has:Error from server (Conflict)
deployment.apps/nginx configured
I0526 02:50:50.135363   22225 event.go:307] "Event occurred" object="namespace-1685069439-15185/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-5675dfc785 to 3"
I0526 02:50:50.153747   22225 event.go:307] "Event occurred" object="namespace-1685069439-15185/nginx-5675dfc785" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-5675dfc785-26ttz"
I0526 02:50:50.166833   22225 event.go:307] "Event occurred" object="namespace-1685069439-15185/nginx-5675dfc785" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-5675dfc785-xqnfr"
I0526 02:50:50.167490   22225 event.go:307] "Event occurred" object="namespace-1685069439-15185/nginx-5675dfc785" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-5675dfc785-v4k9l"
Successful
... skipping 538 lines ...
+++ [0526 02:51:14] Creating namespace namespace-1685069474-320
namespace/namespace-1685069474-320 created
Context "test" modified.
+++ [0526 02:51:14] Testing kubectl get
get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:{
    "apiVersion": "v1",
    "items": [],
... skipping 21 lines ...
has not:No resources found
Successful
message:NAME
has not:No resources found
get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:error: the server doesn't have a resource type "foobar"
has not:No resources found
Successful
message:No resources found in namespace-1685069474-320 namespace.
has:No resources found
Successful
message:
has not:No resources found
Successful
message:No resources found in namespace-1685069474-320 namespace.
has:No resources found
get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
Successful
message:Error from server (NotFound): pods "abc" not found
has not:List
Successful
message:I0526 02:51:16.143342   35085 loader.go:395] Config loaded from file:  /tmp/tmp.RJoDPro5JA/.kube/config
I0526 02:51:16.147845   35085 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0526 02:51:16.162951   35085 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
I0526 02:51:16.164441   35085 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds
... skipping 597 lines ...
}
get.sh:158: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
<no value>Successful
message:valid-pod:
has:valid-pod:
Successful
message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2023-05-26T02:51:23Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fieldsType":"FieldsV1", "fieldsV1":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl-create", "operation":"Update", "time":"2023-05-26T02:51:23Z"}}, "name":"valid-pod", "namespace":"namespace-1685069483-2185", "resourceVersion":"1127", "uid":"62da914e-d7b3-4d5e-b4d9-0aa193e97865"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"registry.k8s.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "preemptionPolicy":"PreemptLowerPriority", "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
has:missing is not found
error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
Successful
message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2023-05-26T02:51:23Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl-create","operation":"Update","time":"2023-05-26T02:51:23Z"}],"name":"valid-pod","namespace":"namespace-1685069483-2185","resourceVersion":"1127","uid":"62da914e-d7b3-4d5e-b4d9-0aa193e97865"},"spec":{"containers":[{"image":"registry.k8s.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority","priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2023-05-26T02:51:23Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fieldsType:FieldsV1 fieldsV1:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl-create operation:Update time:2023-05-26T02:51:23Z]] name:valid-pod namespace:namespace-1685069483-2185 resourceVersion:1127 uid:62da914e-d7b3-4d5e-b4d9-0aa193e97865] spec:map[containers:[map[image:registry.k8s.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true preemptionPolicy:PreemptLowerPriority priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
has:map has no entry for key "missing"
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
Successful
message:Error from server (NotFound): the server could not find the requested resource
has:the server could not find the requested resource
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:STATUS
Successful
... skipping 78 lines ...
  terminationGracePeriodSeconds: 30
status:
  phase: Pending
  qosClass: Guaranteed
has:name: valid-pod
Successful
message:Error from server (NotFound): pods "invalid-pod" not found
has:"invalid-pod" not found
pod "valid-pod" deleted
get.sh:204: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/redis-master created
pod/valid-pod created
Successful
... skipping 1140 lines ...
+++ [0526 02:51:37] Creating namespace namespace-1685069497-26228
namespace/namespace-1685069497-26228 created
Context "test" modified.
+++ [0526 02:51:37] Testing kubectl exec POD COMMAND
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
Successful
message:error: cannot exec into multiple objects at a time
has:cannot exec into multiple objects at a time
pod/test-pod created
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pods "test-pod" not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
pod "test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_exec_resource_name_tests
Running command: run_kubectl_exec_resource_name_tests

... skipping 3 lines ...
+++ [0526 02:51:38] Creating namespace namespace-1685069498-2481
namespace/namespace-1685069498-2481 created
Context "test" modified.
+++ [0526 02:51:38] Testing kubectl exec TYPE/NAME COMMAND
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: the server doesn't have a resource type "foo"
has:error:
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): deployments.apps "bar" not found
has:"bar" not found
pod/test-pod created
replicaset.apps/frontend created
I0526 02:51:38.999575   22225 event.go:307] "Event occurred" object="namespace-1685069498-2481/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-pr7sg"
I0526 02:51:39.003609   22225 event.go:307] "Event occurred" object="namespace-1685069498-2481/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-h2spw"
I0526 02:51:39.003638   22225 event.go:307] "Event occurred" object="namespace-1685069498-2481/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-d7mr6"
configmap/test-set-env-config created
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
has:not implemented
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod, type/name or --filename must be specified
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod frontend-pr7sg does not have a host assigned
has not:not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod frontend-pr7sg does not have a host assigned
has not:pod, type/name or --filename must be specified
pod "test-pod" deleted
replicaset.apps "frontend" deleted
configmap "test-set-env-config" deleted
+++ exit code: 0
Recording: run_create_secret_tests
Running command: run_create_secret_tests

+++ Running case: test-cmd.run_create_secret_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_create_secret_tests
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
message:user-specified
has:user-specified
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"05e5cd7e-6e50-484a-a3fc-9a6ca261994d","resourceVersion":"1227","creationTimestamp":"2023-05-26T02:51:40Z"}}
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"05e5cd7e-6e50-484a-a3fc-9a6ca261994d","resourceVersion":"1228","creationTimestamp":"2023-05-26T02:51:40Z"},"data":{"key1":"config1"}}
has:uid
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"05e5cd7e-6e50-484a-a3fc-9a6ca261994d","resourceVersion":"1228","creationTimestamp":"2023-05-26T02:51:40Z"},"data":{"key1":"config1"}}
has:config1
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"05e5cd7e-6e50-484a-a3fc-9a6ca261994d"}}
Successful
message:Error from server (NotFound): configmaps "tester-update-cm" not found
has:configmaps "tester-update-cm" not found
+++ exit code: 0
Recording: run_kubectl_create_kustomization_directory_tests
Running command: run_kubectl_create_kustomization_directory_tests

+++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 25 lines ...
+++ command: run_kubectl_create_validate_tests
+++ [0526 02:51:41] Creating namespace namespace-1685069501-5708
namespace/namespace-1685069501-5708 created
Context "test" modified.
+++ [0526 02:51:41] Testing kubectl create --validate
Successful
message:Error from server (BadRequest): error when creating "hack/testdata/invalid-deployment-unknown-and-duplicate-fields.yaml": Deployment in version "v1" cannot be handled as a Deployment: strict decoding error: unknown field "spec.baz", unknown field "spec.foo"
has either:strict decoding error
or:error validating data
+++ [0526 02:51:41] Testing kubectl create --validate=true
Successful
message:Error from server (BadRequest): error when creating "hack/testdata/invalid-deployment-unknown-and-duplicate-fields.yaml": Deployment in version "v1" cannot be handled as a Deployment: strict decoding error: unknown field "spec.baz", unknown field "spec.foo"
has either:strict decoding error
or:error validating data
+++ [0526 02:51:41] Testing kubectl create --validate=false
I0526 02:51:41.906493   22225 event.go:307] "Event occurred" object="namespace-1685069501-5708/invalid-nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set invalid-nginx-deployment-cbdccf466 to 4"
Successful
message:deployment.apps/invalid-nginx-deployment created
has:deployment.apps/invalid-nginx-deployment created
I0526 02:51:41.911664   22225 event.go:307] "Event occurred" object="namespace-1685069501-5708/invalid-nginx-deployment-cbdccf466" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-cbdccf466-zsw42"
I0526 02:51:41.915301   22225 event.go:307] "Event occurred" object="namespace-1685069501-5708/invalid-nginx-deployment-cbdccf466" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-cbdccf466-l44p9"
I0526 02:51:41.915331   22225 event.go:307] "Event occurred" object="namespace-1685069501-5708/invalid-nginx-deployment-cbdccf466" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-cbdccf466-8jg8k"
I0526 02:51:41.920714   22225 event.go:307] "Event occurred" object="namespace-1685069501-5708/invalid-nginx-deployment-cbdccf466" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-cbdccf466-7rqnq"
deployment.apps "invalid-nginx-deployment" deleted
+++ [0526 02:51:41] Testing kubectl create --validate=strict
Successful
message:Error from server (BadRequest): error when creating "hack/testdata/invalid-deployment-unknown-and-duplicate-fields.yaml": Deployment in version "v1" cannot be handled as a Deployment: strict decoding error: unknown field "spec.baz", unknown field "spec.foo"
has either:strict decoding error
or:error validating data
+++ [0526 02:51:42] Testing kubectl create --validate=warn
Warning: unknown field "spec.baz"
Warning: unknown field "spec.foo"
I0526 02:51:42.379737   22225 event.go:307] "Event occurred" object="namespace-1685069501-5708/invalid-nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set invalid-nginx-deployment-cbdccf466 to 4"
Successful
message:deployment.apps/invalid-nginx-deployment created
... skipping 13 lines ...
I0526 02:51:42.527608   22225 event.go:307] "Event occurred" object="namespace-1685069501-5708/invalid-nginx-deployment-cbdccf466" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-cbdccf466-hdzp5"
I0526 02:51:42.532119   22225 event.go:307] "Event occurred" object="namespace-1685069501-5708/invalid-nginx-deployment-cbdccf466" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: invalid-nginx-deployment-cbdccf466-hfr6s"
I0526 02:51:42.545164   22225 namespace_controller.go:182] "Namespace has been deleted" namespace="test-events"
deployment.apps "invalid-nginx-deployment" deleted
+++ [0526 02:51:42] Testing kubectl create
Successful
message:Error from server (BadRequest): error when creating "hack/testdata/invalid-deployment-unknown-and-duplicate-fields.yaml": Deployment in version "v1" cannot be handled as a Deployment: strict decoding error: unknown field "spec.baz", unknown field "spec.foo"
has either:strict decoding error
or:error validating data
+++ [0526 02:51:42] Testing kubectl create --validate=foo
Successful
message:error: invalid - validate option "foo"; must be one of: strict (or true), warn, ignore (or false)
has:invalid - validate option "foo"
+++ exit code: 0
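Note: the --validate matrix above maps directly to kubectl's field-validation modes; a minimal sketch of the same calls (manifest path taken from the test, behavior as in kubectl v1.27):
  kubectl create -f hack/testdata/invalid-deployment-unknown-and-duplicate-fields.yaml --validate=strict   # rejected: strict decoding error (same as --validate=true)
  kubectl create -f hack/testdata/invalid-deployment-unknown-and-duplicate-fields.yaml --validate=warn     # created; unknown fields surface as Warning: lines
  kubectl create -f hack/testdata/invalid-deployment-unknown-and-duplicate-fields.yaml --validate=ignore   # created silently (same as --validate=false)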
Recording: run_convert_tests
Running command: run_convert_tests

+++ Running case: test-cmd.run_convert_tests 
... skipping 50 lines ...
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}
has:apps/v1beta1
deployment.apps "nginx" deleted
Successful
message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
Successful
message:nginx:
has:nginx:
+++ exit code: 0
Recording: run_kubectl_delete_allnamespaces_tests
... skipping 103 lines ...
has:Timeout
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
Successful
message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
has:Invalid timeout value
pod "valid-pod" deleted
+++ exit code: 0
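Note: the 'Invalid timeout value' check above matches the validation kubectl applies to duration flags such as the global --request-timeout (an attribution assumed here, since the invoking commands are elided from this log); a sketch:
  kubectl get pods --request-timeout=1m   # accepted: integer with an s/m/h unit
  kubectl get pods --request-timeout=foo  # rejected: Invalid timeout value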
Recording: run_crd_tests
Running command: run_crd_tests

... skipping 157 lines ...
Flag --record has been deprecated, --record will be removed in the future
foo.company.com/test patched
crd.sh:296: Successful get foos/test {{.patched}}: value2
Flag --record has been deprecated, --record will be removed in the future
foo.company.com/test patched
crd.sh:298: Successful get foos/test {{.patched}}: <no value>
+++ [0526 02:51:51] "kubectl patch --local" returns error as expected for CustomResource: error: strategic merge patch is not supported for company.com/v1, Kind=Foo locally, try --type merge
{
    "apiVersion": "company.com/v1",
    "kind": "Foo",
    "metadata": {
        "annotations": {
            "kubernetes.io/change-cause": "kubectl patch foos/test --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 228 lines ...
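Note: the expected failure above reflects that strategic merge patch relies on Go struct metadata that custom resources do not have, so a local patch of a CR must name a patch type explicitly; a sketch (foo.yaml is hypothetical, standing in for the Foo manifest):
  kubectl patch -f foo.yaml --local -p '{"patched":"value2"}' -o yaml                # error: strategic merge patch is not supported ... try --type merge
  kubectl patch -f foo.yaml --local --type=merge -p '{"patched":"value2"}' -o yaml   # succeeds with a JSON merge patch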
crd.sh:519: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: 
namespace/non-native-resources created
bar.company.com/test created
crd.sh:524: Successful get bars {{len .items}}: 1
(Bnamespace "non-native-resources" deleted
crd.sh:527: Successful get bars {{len .items}}: 0
Error from server (NotFound): namespaces "non-native-resources" not found
customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
I0526 02:52:24.721865   19396 handler.go:232] Adding GroupVersion company.com v1 to ResourceManager
I0526 02:52:24.726392   19396 handler.go:232] Adding GroupVersion company.com v1 to ResourceManager
I0526 02:52:24.735581   19396 handler.go:232] Adding GroupVersion company.com v1 to ResourceManager
I0526 02:52:24.888292   19396 handler.go:232] Adding GroupVersion company.com v1 to ResourceManager
customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
... skipping 15 lines ...
+++ [0526 02:52:25] Testing recursive resources
+++ [0526 02:52:25] Creating namespace namespace-1685069545-29887
namespace/namespace-1685069545-29887 created
Context "test" modified.
generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
W0526 02:52:25.736134   19396 cacher.go:171] Terminating all watchers from cacher foos.company.com
E0526 02:52:25.737478   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0526 02:52:25.908250   19396 cacher.go:171] Terminating all watchers from cacher bars.company.com
E0526 02:52:25.910457   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0526 02:52:26.065732   19396 cacher.go:171] Terminating all watchers from cacher resources.mygroup.example.com
E0526 02:52:26.068055   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0526 02:52:26.236529   19396 cacher.go:171] Terminating all watchers from cacher validfoos.company.com
E0526 02:52:26.237720   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:pod/busybox0 created
pod/busybox1 created
error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
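Note: these cases exercise kubectl's --recursive (-R) file handling: the directory tree is walked, valid manifests are applied, and the deliberately broken busybox-broken.yaml (whose 'kind' key is misspelled 'ind') errors without aborting its siblings; a sketch of the pattern:
  kubectl create -f hack/testdata/recursive/pod --recursive
  # pod/busybox0 and pod/busybox1 are created; busybox-broken.yaml is reported and skipped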
generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
Successful
message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
W0526 02:52:26.721681   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:52:26.721727   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0526 02:52:26.940797   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:52:26.940832   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:pod/busybox0 replaced
pod/busybox1 replaced
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
W0526 02:52:27.382115   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:52:27.382147   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:Name:         busybox0
Namespace:    namespace-1685069545-29887
Priority:     0
Node:         <none>
Labels:       app=busybox0
... skipping 158 lines ...
has:Object 'Kind' is missing
generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
Successful
message:pod/busybox0 annotated
pod/busybox1 annotated
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
W0526 02:52:27.828199   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:52:27.828237   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:Warning: resource pods/busybox0 is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
pod/busybox0 configured
Warning: resource pods/busybox1 is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
pod/busybox1 configured
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:264: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:273: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:278: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
Successful
message:pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:283: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
W0526 02:52:28.597143   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:52:28.597185   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:288: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
Successful
message:pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:293: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:297: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "busybox0" force deleted
pod "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:302: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
W0526 02:52:29.022435   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:52:29.022479   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/busybox0 created
I0526 02:52:29.191983   22225 event.go:307] "Event occurred" object="namespace-1685069545-29887/busybox0" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox0-ndcw8"
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0526 02:52:29.228114   22225 event.go:307] "Event occurred" object="namespace-1685069545-29887/busybox1" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox1-4pqgl"
generic-resources.sh:306: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:311: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:312: Successful get rc busybox0 {{.spec.replicas}}: 1
I0526 02:52:29.507694   22225 namespace_controller.go:182] "Namespace has been deleted" namespace="non-native-resources"
generic-resources.sh:313: Successful get rc busybox1 {{.spec.replicas}}: 1
W0526 02:52:29.532440   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:52:29.532475   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:318: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 1 2 80
(Bgeneric-resources.sh:319: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 1 2 80
Successful
message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
horizontalpodautoscaler.autoscaling/busybox1 autoscaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
horizontalpodautoscaler.autoscaling "busybox0" deleted
horizontalpodautoscaler.autoscaling "busybox1" deleted
generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:328: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:329: Successful get rc busybox1 {{.spec.replicas}}: 1
W0526 02:52:30.119111   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:52:30.119154   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0526 02:52:30.142597   19396 alloc.go:330] "allocated clusterIPs" service="namespace-1685069545-29887/busybox0" clusterIPs={"IPv4":"10.0.0.118"}
I0526 02:52:30.148477   19396 alloc.go:330] "allocated clusterIPs" service="namespace-1685069545-29887/busybox1" clusterIPs={"IPv4":"10.0.0.27"}
generic-resources.sh:333: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
generic-resources.sh:334: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
Successful
message:service/busybox0 exposed
service/busybox1 exposed
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:340: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:341: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:342: Successful get rc busybox1 {{.spec.replicas}}: 1
I0526 02:52:30.574755   22225 event.go:307] "Event occurred" object="namespace-1685069545-29887/busybox0" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox0-ml9cc"
I0526 02:52:30.581350   22225 event.go:307] "Event occurred" object="namespace-1685069545-29887/busybox1" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox1-2dgv2"
generic-resources.sh:346: Successful get rc busybox0 {{.spec.replicas}}: 2
generic-resources.sh:347: Successful get rc busybox1 {{.spec.replicas}}: 2
Successful
message:replicationcontroller/busybox0 scaled
replicationcontroller/busybox1 scaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
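Note: the replica counts flipping from 1 to 2 above come from a bulk scale over the same tree; a sketch:
  kubectl scale --replicas=2 -f hack/testdata/recursive/rc --recursive
  # busybox0 and busybox1 are scaled; busybox-broken.yaml still fails to decode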
generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:356: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:361: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx1-deployment created
I0526 02:52:31.188815   22225 event.go:307] "Event occurred" object="namespace-1685069545-29887/nginx1-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx1-deployment-69c599568 to 2"
I0526 02:52:31.193512   22225 event.go:307] "Event occurred" object="namespace-1685069545-29887/nginx1-deployment-69c599568" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx1-deployment-69c599568-x6c97"
I0526 02:52:31.196732   22225 event.go:307] "Event occurred" object="namespace-1685069545-29887/nginx1-deployment-69c599568" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx1-deployment-69c599568-zz8br"
deployment.apps/nginx0-deployment created
error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0526 02:52:31.211707   22225 event.go:307] "Event occurred" object="namespace-1685069545-29887/nginx0-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx0-deployment-5944978c6f to 2"
I0526 02:52:31.215875   22225 event.go:307] "Event occurred" object="namespace-1685069545-29887/nginx0-deployment-5944978c6f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx0-deployment-5944978c6f-8q8nv"
I0526 02:52:31.218789   22225 event.go:307] "Event occurred" object="namespace-1685069545-29887/nginx0-deployment-5944978c6f" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx0-deployment-5944978c6f-42j89"
generic-resources.sh:365: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
generic-resources.sh:366: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9:registry.k8s.io/nginx:1.7.9:
generic-resources.sh:370: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9:registry.k8s.io/nginx:1.7.9:
Successful
message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
deployment.apps/nginx1-deployment paused
deployment.apps/nginx0-deployment paused
generic-resources.sh:378: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
Successful
message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 11 lines ...
has:Waiting for deployment "nginx1-deployment" rollout to finish
Successful
message:Waiting for deployment "nginx1-deployment" rollout to finish: 0 of 2 updated replicas are available...
timed out waiting for the condition
unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
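Note: both timeouts above are expected: the deployments were just paused, and a paused rollout never progresses, so rollout status waits until its deadline; a sketch (the timeout value is hypothetical):
  kubectl rollout status -f hack/testdata/recursive/deployment --recursive --timeout=1m
  # waits, then: timed out waiting for the condition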
W0526 02:52:34.115633   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:52:34.115680   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0526 02:52:34.775302   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:52:34.775345   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:Waiting for deployment "nginx1-deployment" rollout to finish: 0 of 2 updated replicas are available...
Waiting for deployment "nginx0-deployment" rollout to finish: 0 of 2 updated replicas are available...
timed out waiting for the condition
timed out waiting for the condition
unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 18 lines ...
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx0-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx1-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
deployment.apps "nginx1-deployment" force deleted
deployment.apps "nginx0-deployment" force deleted
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"registry.k8s.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W0526 02:52:35.409158   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:52:35.409190   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:411: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/busybox0 created
I0526 02:52:36.406194   22225 event.go:307] "Event occurred" object="namespace-1685069545-29887/busybox0" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox0-rbpqc"
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0526 02:52:36.443300   22225 event.go:307] "Event occurred" object="namespace-1685069545-29887/busybox1" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox1-8mk9g"
W0526 02:52:36.494153   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:52:36.494181   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:415: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:no rollbacker has been implemented for "ReplicationController"
Successful
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox0" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox1" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox0" resuming is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox1" resuming is not supported
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
+++ exit code: 0
Recording: run_namespace_tests
Running command: run_namespace_tests

+++ Running case: test-cmd.run_namespace_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_namespace_tests
+++ [0526 02:52:37] Testing kubectl(v1:namespaces)
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created (dry run)
namespace/my-namespace created (server dry run)
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created
core.sh:1504: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
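Note: the two dry-run creates above persist nothing, which is why the NotFound probe between them still fails; only the real create makes the namespace visible. A sketch:
  kubectl create namespace my-namespace --dry-run=client -o yaml  # rendered client-side only
  kubectl create namespace my-namespace --dry-run=server          # admission runs, nothing is stored
  kubectl create namespace my-namespace                           # persisted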
query for namespaces had limit param
query for resourcequotas had limit param
query for limitranges had limit param
... skipping 132 lines ...
I0526 02:52:38.513374   41130 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1685069507-8697/resourcequotas?limit=500 200 OK in 1 milliseconds
I0526 02:52:38.514481   41130 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1685069507-8697/limitranges?limit=500 200 OK in 0 milliseconds
I0526 02:52:38.516045   41130 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1685069545-29887 200 OK in 1 milliseconds
I0526 02:52:38.517197   41130 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1685069545-29887/resourcequotas?limit=500 200 OK in 1 milliseconds
I0526 02:52:38.518324   41130 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1685069545-29887/limitranges?limit=500 200 OK in 1 milliseconds
(Bnamespace "my-namespace" deleted
W0526 02:52:43.547728   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:52:43.547762   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0526 02:52:43.676711   22225 shared_informer.go:311] Waiting for caches to sync for resource quota
I0526 02:52:43.676753   22225 shared_informer.go:318] Caches are synced for resource quota
I0526 02:52:43.896242   22225 shared_informer.go:311] Waiting for caches to sync for garbage collector
I0526 02:52:43.896289   22225 shared_informer.go:318] Caches are synced for garbage collector
namespace/my-namespace condition met
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created
W0526 02:52:44.076039   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:52:44.076080   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1515: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
Successful
message:Warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
namespace "my-namespace" deleted
namespace "namespace-1685069325-20240" deleted
... skipping 32 lines ...
namespace "namespace-1685069503-16324" deleted
namespace "namespace-1685069503-18659" deleted
namespace "namespace-1685069504-9492" deleted
namespace "namespace-1685069506-20553" deleted
namespace "namespace-1685069507-8697" deleted
namespace "namespace-1685069545-29887" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:Warning: deleting cluster-scoped resources
Successful
message:Warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
namespace "my-namespace" deleted
namespace "namespace-1685069325-20240" deleted
... skipping 32 lines ...
namespace "namespace-1685069503-16324" deleted
namespace "namespace-1685069503-18659" deleted
namespace "namespace-1685069504-9492" deleted
namespace "namespace-1685069506-20553" deleted
namespace "namespace-1685069507-8697" deleted
namespace "namespace-1685069545-29887" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:namespace "my-namespace" deleted
namespace/quotas created
core.sh:1522: Successful get namespaces/quotas {{.metadata.name}}: quotas
core.sh:1523: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name "test-quota" }}found{{end}}{{end}}:: :
resourcequota/test-quota created (dry run)
I0526 02:52:44.586259   22225 horizontal.go:512] "Horizontal Pod Autoscaler has been deleted" HPA="namespace-1685069545-29887/busybox0"
... skipping 6 lines ...
query for resourcequotas had user-specified limit param
Successful describe resourcequotas verbose logs:
I0526 02:52:44.864219   41332 loader.go:395] Config loaded from file:  /tmp/tmp.RJoDPro5JA/.kube/config
I0526 02:52:44.868712   41332 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0526 02:52:44.874999   41332 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/quotas/resourcequotas?limit=500 200 OK in 1 milliseconds
I0526 02:52:44.877186   41332 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/quotas/resourcequotas/test-quota 200 OK in 1 milliseconds
W0526 02:52:44.986958   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:52:44.986995   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0526 02:52:45.007724   22225 resource_quota_controller.go:337] "Resource quota has been deleted" key="quotas/test-quota"
resourcequota "test-quota" deleted
namespace "quotas" deleted
W0526 02:52:46.595824   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:52:46.595861   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1544: Successful get namespaces {{range.items}}{{ if eq .metadata.name "other" }}found{{end}}{{end}}:: :
namespace/other created
core.sh:1548: Successful get namespaces/other {{.metadata.name}}: other
core.sh:1552: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
pod/valid-pod created
core.sh:1556: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:1558: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Successful
message:error: a resource cannot be retrieved by name across all namespaces
has:a resource cannot be retrieved by name across all namespaces
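Note: the error above is by design: object names are only unique within a namespace, so a name cannot be combined with --all-namespaces. A sketch:
  kubectl get pods --all-namespaces          # fine: lists pods across all namespaces
  kubectl get pod valid-pod --all-namespaces # error: a resource cannot be retrieved by name across all namespaces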
core.sh:1565: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:1569: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
(Bnamespace "other" deleted
... skipping 126 lines ...
core.sh:920: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
core.sh:921: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
secret "secret-string-data" deleted
core.sh:930: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
(Bsecret "test-secret" deleted
namespace "test-secrets" deleted
W0526 02:52:59.631087   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:52:59.631137   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0526 02:53:01.194675   22225 namespace_controller.go:182] "Namespace has been deleted" namespace="other"
W0526 02:53:01.857437   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:53:01.857477   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0526 02:53:04.182960   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:53:04.183000   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Recording: run_configmap_tests
Running command: run_configmap_tests

+++ Running case: test-cmd.run_configmap_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 14 lines ...
configmap/test-configmap created (server dry run)
core.sh:46: Successful get configmaps {{range.items}}{{ if eq .metadata.name "test-configmap" }}found{{end}}{{end}}:: :
configmap/test-configmap created
configmap/test-binary-configmap created
core.sh:51: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
core.sh:52: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
W0526 02:53:05.838343   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:53:05.838382   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
query for configmaps had limit param
query for events had limit param
query for configmaps had user-specified limit param
Successful describe configmaps verbose logs:
I0526 02:53:05.900074   42503 loader.go:395] Config loaded from file:  /tmp/tmp.RJoDPro5JA/.kube/config
I0526 02:53:05.905115   42503 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
... skipping 17 lines ...
+++ command: run_client_config_tests
+++ [0526 02:53:11] Creating namespace namespace-1685069591-5236
namespace/namespace-1685069591-5236 created
Context "test" modified.
+++ [0526 02:53:11] Testing client config
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:Error in configuration: context was not found for specified context: missing-context
has:context was not found for specified context: missing-context
Successful
message:error: no server found for cluster "missing-cluster"
has:no server found for cluster "missing-cluster"
Successful
message:error: auth info "missing-user" does not exist
has:auth info "missing-user" does not exist
Successful
message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "vendor/k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
has:error loading config file
Successful
message:error: stat missing-config: no such file or directory
has:no such file or directory
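Note: each failure above is reproducible with kubectl's global client-config flags; a sketch:
  kubectl get pods --kubeconfig=missing       # error: stat missing: no such file or directory
  kubectl get pods --context=missing-context  # context was not found for specified context
  kubectl get pods --cluster=missing-cluster  # no server found for cluster "missing-cluster"
  kubectl get pods --user=missing-user        # auth info "missing-user" does not exist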
+++ exit code: 0
Recording: run_service_accounts_tests
Running command: run_service_accounts_tests

+++ Running case: test-cmd.run_service_accounts_tests 
... skipping 57 lines ...
Labels:                        <none>
Annotations:                   <none>
Schedule:                      59 23 31 2 *
Concurrency Policy:            Allow
Suspend:                       False
Successful Job History Limit:  3
Failed Job History Limit:      1
Starting Deadline Seconds:     <unset>
Selector:                      <unset>
Parallelism:                   <unset>
Completions:                   <unset>
Pod Template:
  Labels:  <none>
... skipping 56 lines ...
                  job-name=test-job
Annotations:      cronjob.kubernetes.io/instantiate: manual
Parallelism:      1
Completions:      1
Completion Mode:  NonIndexed
Start Time:       Fri, 26 May 2023 02:53:19 +0000
Pods Statuses:    1 Active (0 Ready) / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  batch.kubernetes.io/controller-uid=79aea5e1-3b15-4672-8a2c-25ce78b2c906
           batch.kubernetes.io/job-name=test-job
           controller-uid=79aea5e1-3b15-4672-8a2c-25ce78b2c906
           job-name=test-job
  Containers:
... skipping 94 lines ...
I0526 02:53:26.790505   43808 loader.go:395] Config loaded from file:  /tmp/tmp.RJoDPro5JA/.kube/config
I0526 02:53:26.794933   43808 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0526 02:53:26.801154   43808 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1685069606-31498/podtemplates?limit=500 200 OK in 1 milliseconds
I0526 02:53:26.803367   43808 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1685069606-31498/podtemplates/nginx 200 OK in 1 milliseconds
I0526 02:53:26.804739   43808 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1685069606-31498/events?fieldSelector=involvedObject.name%3Dnginx%2CinvolvedObject.namespace%3Dnamespace-1685069606-31498%2CinvolvedObject.kind%3DPodTemplate%2CinvolvedObject.uid%3D98c0d53c-8193-4662-92b8-2066ba2279b6&limit=500 200 OK in 1 milliseconds
(Bpodtemplate "nginx" deleted
W0526 02:53:26.998832   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:53:26.998868   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1649: Successful get podtemplate {{range.items}}{{.metadata.name}}:{{end}}: 
+++ exit code: 0
Recording: run_service_tests
Running command: run_service_tests

+++ Running case: test-cmd.run_service_tests 
... skipping 360 lines ...
  type: ClusterIP
status:
  loadBalancer: {}
Successful
message:kubectl-create kubectl-set
has:kubectl-set
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
core.sh:1034: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
service/redis-master selector updated
Successful
message:Error from server (Conflict): Operation cannot be fulfilled on services "redis-master": the object has been modified; please apply your changes to the latest version and try again
has:Conflict
core.sh:1047: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
(Bservice "redis-master" deleted
core.sh:1054: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
(Bcore.sh:1058: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
(BI0526 02:53:29.764349   19396 alloc.go:330] "allocated clusterIPs" service="default/redis-master" clusterIPs={"IPv4":"10.0.0.239"}
... skipping 74 lines ...
daemonset.apps/bind created
I0526 02:53:33.517909   19396 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
apps.sh:34: Successful get daemonsets bind {{.metadata.generation}}: 1
daemonset.apps/bind configured
apps.sh:37: Successful get daemonsets bind {{.metadata.generation}}: 1
daemonset.apps/bind image updated
W0526 02:53:33.988512   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:53:33.988546   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:40: Successful get daemonsets bind {{.metadata.generation}}: 2
daemonset.apps/bind env updated
apps.sh:42: Successful get daemonsets bind {{.metadata.generation}}: 3
daemonset.apps/bind resource requirements updated
apps.sh:44: Successful get daemonsets bind {{.metadata.generation}}: 4
Successful
... skipping 219 lines ...
message:daemonset.apps/bind 
REVISION  CHANGE-CAUSE
2         kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true
3         kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true
has:3         kubectl apply
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
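These checks exercise kubectl rollout undo with --to-revision: an out-of-range revision fails cleanly, while a recorded one rolls the DaemonSet back (object name from the surrounding test):
  kubectl rollout undo daemonset/bind --to-revision=1000000  # error: unable to find specified revision 1000000 in history
  kubectl rollout undo daemonset/bind --to-revision=2        # daemonset.apps/bind rolled back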
apps.sh:122: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/pause:2.0:
apps.sh:123: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
daemonset.apps/bind rolled back
apps.sh:126: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/pause:latest:
apps.sh:127: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/nginx:test-cmd:
... skipping 60 lines ...
Namespace:    namespace-1685069617-12224
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1685069617-12224
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
Namespace:    namespace-1685069617-12224
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
Namespace:    namespace-1685069617-12224
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 27 lines ...
Namespace:    namespace-1685069617-12224
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1685069617-12224
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1685069617-12224
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
Namespace:    namespace-1685069617-12224
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 25 lines ...
core.sh:1240: Successful get rc frontend {{.spec.replicas}}: 3
replicationcontroller/frontend scaled
E0526 02:53:39.483721   22225 replica_set.go:232] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend  namespace-1685069617-12224  5a3f5339-360f-4a1c-8034-d521228afe3f 2266 2 2023-05-26 02:53:38 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] [] [{kubectl Update v1 <nil> FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {kube-controller-manager Update v1 2023-05-26 02:53:38 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status} {kubectl-create Update v1 2023-05-26 02:53:38 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{"f:selector":{},"f:template":{".":{},"f:metadata":{".":{},"f:creationTimestamp":{},"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{".":{},"f:containers":{".":{},"k:{\"name\":\"php-redis\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"GET_HOSTS_FROM\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] [] []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v4 [] []  [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}] []} [] [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00348d968 <nil> ClusterFirst map[]   <nil>  false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil <nil> [] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
I0526 02:53:39.489893   22225 event.go:307] "Event occurred" object="namespace-1685069617-12224/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: frontend-bgz7r"
core.sh:1244: Successful get rc frontend {{.spec.replicas}}: 2
core.sh:1248: Successful get rc frontend {{.spec.replicas}}: 2
error: Expected replicas to be 3, was 2
core.sh:1252: Successful get rc frontend {{.spec.replicas}}: 2
core.sh:1256: Successful get rc frontend {{.spec.replicas}}: 2
replicationcontroller/frontend scaled
I0526 02:53:39.910333   22225 event.go:307] "Event occurred" object="namespace-1685069617-12224/frontend" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-hdrkq"
core.sh:1260: Successful get rc frontend {{.spec.replicas}}: 3
core.sh:1264: Successful get rc frontend {{.spec.replicas}}: 3
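The "Expected replicas to be 3, was 2" failure above is the --current-replicas precondition of kubectl scale: the request is rejected unless the live replica count matches. A plausible reconstruction (the exact test invocation is elided):
  kubectl scale rc frontend --current-replicas=3 --replicas=4
  # error: Expected replicas to be 3, was 2   (the rc had already been scaled down to 2)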
... skipping 62 lines ...
I0526 02:53:42.311197   22225 event.go:307] "Event occurred" object="namespace-1685069617-12224/nginx-deployment-7df65dc9f4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-7df65dc9f4-bdljk"
I0526 02:53:42.314328   22225 event.go:307] "Event occurred" object="namespace-1685069617-12224/nginx-deployment-7df65dc9f4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-7df65dc9f4-knkww"
I0526 02:53:42.315730   22225 event.go:307] "Event occurred" object="namespace-1685069617-12224/nginx-deployment-7df65dc9f4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-7df65dc9f4-7gjdl"
deployment.apps/nginx-deployment scaled
I0526 02:53:42.389938   22225 event.go:307] "Event occurred" object="namespace-1685069617-12224/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-7df65dc9f4 to 2 from 3"
I0526 02:53:42.396004   22225 event.go:307] "Event occurred" object="namespace-1685069617-12224/nginx-deployment-7df65dc9f4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-7df65dc9f4-bdljk"
W0526 02:53:42.405472   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:53:42.405506   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1321: Successful get deployment nginx-deployment {{.spec.replicas}}: 2
deployment.apps "nginx-deployment" deleted
I0526 02:53:42.597539   19396 alloc.go:330] "allocated clusterIPs" service="namespace-1685069617-12224/expose-test-deployment" clusterIPs={"IPv4":"10.0.0.203"}
Successful
message:service/expose-test-deployment exposed
has:service/expose-test-deployment exposed
W0526 02:53:42.626365   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:53:42.626406   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service "expose-test-deployment" deleted
Successful
message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
has:invalid deployment: no selectors
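kubectl expose derives the Service selector from the target's spec, so a workload whose selectors cannot be introspected is rejected unless one is supplied explicitly. A sketch (the no-selector object name is hypothetical):
  kubectl expose deployment nginx-deployment --port=80                       # selector taken from the deployment
  kubectl expose deployment no-selector-deploy --port=80 --selector=app=foo  # supply one when introspection fails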
deployment.apps/nginx-deployment created
I0526 02:53:42.928476   22225 event.go:307] "Event occurred" object="namespace-1685069617-12224/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-7df65dc9f4 to 3"
I0526 02:53:42.932442   22225 event.go:307] "Event occurred" object="namespace-1685069617-12224/nginx-deployment-7df65dc9f4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-7df65dc9f4-ch94l"
I0526 02:53:42.936791   22225 event.go:307] "Event occurred" object="namespace-1685069617-12224/nginx-deployment-7df65dc9f4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-7df65dc9f4-nv5rh"
I0526 02:53:42.936819   22225 event.go:307] "Event occurred" object="namespace-1685069617-12224/nginx-deployment-7df65dc9f4" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-7df65dc9f4-k4dp9"
... skipping 24 lines ...
(Bpod "valid-pod" deleted
service "frontend" deleted
service "frontend-2" deleted
service "frontend-3" deleted
service "frontend-4" deleted
Successful
message:error: cannot expose a Node
has:cannot expose
Successful
message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
has:metadata.name: Invalid value
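Service names are DNS-1035 labels, so the apiserver rejects anything longer than 63 characters regardless of how the object is created; an assumed reproduction using the name from the message above:
  kubectl create service clusterip invalid-large-service-name-that-has-more-than-sixty-three-characters --tcp=80:80
  # ... metadata.name: Invalid value: ... must be no more than 63 characters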
I0526 02:53:44.662106   19396 alloc.go:330] "allocated clusterIPs" service="namespace-1685069617-12224/kubernetes-serve-hostname-testing-sixty-three-characters-in-len" clusterIPs={"IPv4":"10.0.0.216"}
Successful
... skipping 32 lines ...
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1436: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 1 2 70
horizontalpodautoscaler.autoscaling "frontend" deleted
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1440: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 2 3 80
horizontalpodautoscaler.autoscaling "frontend" deleted
error: required flag(s) "max" not set
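The autoscale assertions above map onto kubectl autoscale's flags one-to-one, and --max is the only mandatory one, which is what this failure checks:
  kubectl autoscale rc frontend --min=2 --max=3 --cpu-percent=80  # minReplicas=2 maxReplicas=3 target=80%
  kubectl autoscale rc frontend --min=2 --cpu-percent=80          # error: required flag(s) "max" not set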
replicationcontroller "frontend" deleted
core.sh:1449: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
... skipping 24 lines ...
          limits:
            cpu: 300m
          requests:
            cpu: 300m
      terminationGracePeriodSeconds: 0
status: {}
Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
deployment.apps/nginx-deployment-resources created
I0526 02:53:47.299581   22225 event.go:307] "Event occurred" object="namespace-1685069617-12224/nginx-deployment-resources" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-resources-5f79767bf9 to 3"
I0526 02:53:47.303363   22225 event.go:307] "Event occurred" object="namespace-1685069617-12224/nginx-deployment-resources-5f79767bf9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-5f79767bf9-kslxd"
I0526 02:53:47.307455   22225 event.go:307] "Event occurred" object="namespace-1685069617-12224/nginx-deployment-resources-5f79767bf9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-5f79767bf9-thn4r"
I0526 02:53:47.307535   22225 event.go:307] "Event occurred" object="namespace-1685069617-12224/nginx-deployment-resources-5f79767bf9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-5f79767bf9-xhv2k"
core.sh:1455: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
core.sh:1456: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:test-cmd:
core.sh:1457: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/perl:
deployment.apps/nginx-deployment-resources resource requirements updated
I0526 02:53:47.594497   22225 event.go:307] "Event occurred" object="namespace-1685069617-12224/nginx-deployment-resources" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-resources-77d775b4f9 to 1"
I0526 02:53:47.598447   22225 event.go:307] "Event occurred" object="namespace-1685069617-12224/nginx-deployment-resources-77d775b4f9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-77d775b4f9-mgbfd"
core.sh:1460: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
core.sh:1461: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
error: unable to find container named redis
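kubectl set resources targets containers by name via -c, so asking for a container the pod template does not have fails before anything is patched. A sketch (the container name "nginx" is an assumption based on the images above):
  kubectl set resources deployment nginx-deployment-resources -c=redis --limits=cpu=200m  # error: unable to find container named redis
  kubectl set resources deployment nginx-deployment-resources -c=nginx --limits=cpu=200m  # patches only that container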
deployment.apps/nginx-deployment-resources resource requirements updated
I0526 02:53:47.904220   22225 event.go:307] "Event occurred" object="namespace-1685069617-12224/nginx-deployment-resources" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-resources-5f79767bf9 to 2 from 3"
I0526 02:53:47.910518   22225 event.go:307] "Event occurred" object="namespace-1685069617-12224/nginx-deployment-resources-5f79767bf9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-resources-5f79767bf9-kslxd"
I0526 02:53:47.913910   22225 event.go:307] "Event occurred" object="namespace-1685069617-12224/nginx-deployment-resources" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-resources-688f8b78b5 to 1 from 0"
I0526 02:53:47.917955   22225 event.go:307] "Event occurred" object="namespace-1685069617-12224/nginx-deployment-resources-688f8b78b5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-688f8b78b5-9sc54"
core.sh:1466: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
... skipping 155 lines ...
    status: "True"
    type: Progressing
  observedGeneration: 4
  replicas: 4
  unavailableReplicas: 4
  updatedReplicas: 1
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
core.sh:1477: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
core.sh:1478: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
core.sh:1479: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 46 lines ...
                pod-template-hash=859689d794
Annotations:    deployment.kubernetes.io/desired-replicas: 1
                deployment.kubernetes.io/max-replicas: 2
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/test-nginx-apps
Replicas:       1 current / 1 desired
Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=test-nginx-apps
           pod-template-hash=859689d794
  Containers:
   nginx:
    Image:        registry.k8s.io/nginx:test-cmd
... skipping 123 lines ...
apps.sh:340: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9:
    Image:	registry.k8s.io/nginx:test-cmd
deployment.apps/nginx rolled back (server dry run)
apps.sh:344: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9:
deployment.apps/nginx rolled back
apps.sh:348: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:test-cmd:
error: unable to find specified revision 1000000 in history
apps.sh:351: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:test-cmd:
deployment.apps/nginx rolled back
apps.sh:355: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9:
deployment.apps/nginx paused
error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume' and try again
error: deployments.apps "nginx" can't restart paused deployment (run rollout resume first)
deployment.apps/nginx resumed
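The paired errors above are the pause guard: a paused Deployment accepts spec edits but refuses rollback and restart until resumed. The sequence being tested is roughly:
  kubectl rollout pause deployment/nginx
  kubectl rollout undo deployment/nginx     # error: you cannot rollback a paused deployment ...
  kubectl rollout restart deployment/nginx  # error: can't restart paused deployment
  kubectl rollout resume deployment/nginx   # both commands work again afterwards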
deployment.apps/nginx rolled back
    deployment.kubernetes.io/revision-history: 1,3
error: desired revision (3) is different from the running revision (5)
deployment.apps/nginx restarted
I0526 02:53:57.081574   22225 event.go:307] "Event occurred" object="namespace-1685069628-26296/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-6b9cd9ccf6 to 0 from 1"
I0526 02:53:57.087366   22225 event.go:307] "Event occurred" object="namespace-1685069628-26296/nginx-6b9cd9ccf6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-6b9cd9ccf6-4bsg9"
I0526 02:53:57.090568   22225 event.go:307] "Event occurred" object="namespace-1685069628-26296/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-6b4df94cd9 to 1 from 0"
I0526 02:53:57.093973   22225 event.go:307] "Event occurred" object="namespace-1685069628-26296/nginx-6b4df94cd9" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-6b4df94cd9-6mrsn"
Successful
... skipping 80 lines ...
apps.sh:399: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/perl:
deployment.apps/nginx-deployment image updated
I0526 02:53:59.490518   22225 event.go:307] "Event occurred" object="namespace-1685069628-26296/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-6444b54576 to 1"
I0526 02:53:59.494300   22225 event.go:307] "Event occurred" object="namespace-1685069628-26296/nginx-deployment-6444b54576" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-6444b54576-2wc4f"
apps.sh:402: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9:
apps.sh:403: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/perl:
error: unable to find container named "redis"
deployment.apps/nginx-deployment image updated
apps.sh:408: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:test-cmd:
apps.sh:409: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/perl:
deployment.apps/nginx-deployment image updated
apps.sh:412: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:1.7.9:
apps.sh:413: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/perl:
... skipping 56 lines ...
Warning: key password transferred to PASSWORD
Warning: key username transferred to USERNAME
deployment.apps/nginx-deployment env updated
I0526 02:54:02.753289   22225 event.go:307] "Event occurred" object="namespace-1685069628-26296/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-d588bb564 to 0 from 1"
deployment.apps/nginx-deployment env updated
Successful
message:error: standard input cannot be used for multiple arguments
has:standard input cannot be used for multiple arguments
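The key-transfer warnings earlier in this block come from kubectl set env --from, which imports keys from a ConfigMap or Secret and renames any that are not valid environment-variable names; "-" may stand for stdin at most once per invocation, hence the error above. A sketch using this test's object names:
  kubectl set env deployment/nginx-deployment --from=secret/test-set-env-secret
  # Warning: key password transferred to PASSWORD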
I0526 02:54:02.905994   22225 event.go:307] "Event occurred" object="namespace-1685069628-26296/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-ffc86458c to 1"
deployment.apps "nginx-deployment" deleted
configmap "test-set-env-config" deleted
E0526 02:54:03.012287   22225 replica_set.go:556] sync "namespace-1685069628-26296/nginx-deployment-5446b4888c" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-5446b4888c": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1685069628-26296/nginx-deployment-5446b4888c, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 900b2251-b660-4f31-b329-59e5798039da, UID in object meta: 
secret "test-set-env-secret" deleted
I0526 02:54:03.064927   22225 event.go:307] "Event occurred" object="namespace-1685069628-26296/nginx-deployment-ffc86458c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-ffc86458c-gs6hw"
apps.sh:474: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
E0526 02:54:03.312051   22225 replica_set.go:556] sync "namespace-1685069628-26296/nginx-deployment-57bf7fbc68" failed with replicasets.apps "nginx-deployment-57bf7fbc68" not found
deployment.apps/nginx-deployment created
I0526 02:54:03.326644   22225 event.go:307] "Event occurred" object="namespace-1685069628-26296/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-57bf7fbc68 to 3"
E0526 02:54:03.361216   22225 replica_set.go:556] sync "namespace-1685069628-26296/nginx-deployment-d588bb564" failed with replicasets.apps "nginx-deployment-d588bb564" not found
apps.sh:477: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment:
E0526 02:54:03.412087   22225 replica_set.go:556] sync "namespace-1685069628-26296/nginx-deployment-ffc86458c" failed with replicasets.apps "nginx-deployment-ffc86458c" not found
I0526 02:54:03.463386   22225 event.go:307] "Event occurred" object="namespace-1685069628-26296/nginx-deployment-57bf7fbc68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-57bf7fbc68-4rxv6"
apps.sh:478: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx:test-cmd:
apps.sh:479: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/perl:
I0526 02:54:03.563775   22225 event.go:307] "Event occurred" object="namespace-1685069628-26296/nginx-deployment-57bf7fbc68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-57bf7fbc68-jlgmd"
deployment.apps/nginx-deployment image updated
I0526 02:54:03.610494   22225 event.go:307] "Event occurred" object="namespace-1685069628-26296/nginx-deployment" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-6444b54576 to 1"
... skipping 190 lines ...
    Environment:	<none>
    Mounts:	<none>
  Volumes:	<none>
has:registry.k8s.io/perl
deployment.apps "nginx-deployment" deleted
+++ exit code: 0
E0526 02:54:04.061815   22225 replica_set.go:556] sync "namespace-1685069628-26296/nginx-deployment-57bf7fbc68" failed with replicasets.apps "nginx-deployment-57bf7fbc68" not found
Recording: run_rs_tests
Running command: run_rs_tests

+++ Running case: test-cmd.run_rs_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_rs_tests
E0526 02:54:04.111808   22225 replica_set.go:556] sync "namespace-1685069628-26296/nginx-deployment-6444b54576" failed with replicasets.apps "nginx-deployment-6444b54576" not found
+++ [0526 02:54:04] Creating namespace namespace-1685069644-26423
namespace/namespace-1685069644-26423 created
Context "test" modified.
+++ [0526 02:54:04] Testing kubectl(v1:replicasets)
apps.sh:645: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
replicaset.apps/frontend created
I0526 02:54:04.497156   22225 event.go:307] "Event occurred" object="namespace-1685069644-26423/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-wztww"
+++ [0526 02:54:04] Deleting rs
I0526 02:54:04.501244   22225 event.go:307] "Event occurred" object="namespace-1685069644-26423/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-4cbxd"
I0526 02:54:04.501336   22225 event.go:307] "Event occurred" object="namespace-1685069644-26423/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-nqnqk"
replicaset.apps "frontend" deleted
E0526 02:54:04.612541   22225 replica_set.go:556] sync "namespace-1685069644-26423/frontend" failed with Operation cannot be fulfilled on replicasets.apps "frontend": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1685069644-26423/frontend, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: dab051e6-671f-48b5-9785-f3fe9c5ea297, UID in object meta: 
apps.sh:651: Successful get pods -l tier=frontend {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:655: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
replicaset.apps/frontend created
I0526 02:54:04.899621   22225 event.go:307] "Event occurred" object="namespace-1685069644-26423/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-gf2lg"
I0526 02:54:04.903224   22225 event.go:307] "Event occurred" object="namespace-1685069644-26423/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-ldqfn"
I0526 02:54:04.903252   22225 event.go:307] "Event occurred" object="namespace-1685069644-26423/frontend" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-xdv2r"
apps.sh:659: Successful get pods -l tier=frontend {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
+++ [0526 02:54:04] Deleting rs
replicaset.apps "frontend" deleted
E0526 02:54:05.061460   22225 replica_set.go:556] sync "namespace-1685069644-26423/frontend" failed with replicasets.apps "frontend" not found
apps.sh:663: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:665: Successful get pods -l tier=frontend {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
pod "frontend-gf2lg" deleted
pod "frontend-ldqfn" deleted
pod "frontend-xdv2r" deleted
apps.sh:668: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 15 lines ...
Namespace:    namespace-1685069644-26423
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1685069644-26423
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
Namespace:    namespace-1685069644-26423
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
Namespace:    namespace-1685069644-26423
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 25 lines ...
Namespace:    namespace-1685069644-26423
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1685069644-26423
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1685069644-26423
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
Namespace:    namespace-1685069644-26423
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 226 lines ...
horizontalpodautoscaler.autoscaling/frontend autoscaled
apps.sh:808: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 2 3 80
Successful
message:kubectl-autoscale
has:kubectl-autoscale
horizontalpodautoscaler.autoscaling "frontend" deleted
error: required flag(s) "max" not set
replicaset.apps "frontend" deleted
+++ exit code: 0
Recording: run_stateful_set_tests
Running command: run_stateful_set_tests

+++ Running case: test-cmd.run_stateful_set_tests 
... skipping 21 lines ...
apps.sh:611: Successful get statefulset nginx {{.status.observedGeneration}}: 1
statefulset.apps/nginx scaled
I0526 02:54:13.300323   22225 event.go:307] "Event occurred" object="namespace-1685069652-25989/nginx" fieldPath="" kind="StatefulSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="create Pod nginx-0 in StatefulSet nginx successful"
apps.sh:615: Successful get statefulset nginx {{.spec.replicas}}: 1
apps.sh:616: Successful get statefulset nginx {{.status.observedGeneration}}: 2
statefulset.apps/nginx restarted
W0526 02:54:13.621541   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:54:13.621585   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:624: Successful get statefulset nginx {{.status.observedGeneration}}: 3
statefulset.apps "nginx" deleted
I0526 02:54:13.698485   22225 stateful_set.go:458] "StatefulSet has been deleted" key="namespace-1685069652-25989/nginx"
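This block walks a StatefulSet through generation bumps: a scale and a rollout restart each increment status.observedGeneration once reconciled, which is what the apps.sh assertions count (1, 2, 3). Roughly:
  kubectl scale statefulset nginx --replicas=1   # observedGeneration: 1 -> 2
  kubectl rollout restart statefulset nginx      # observedGeneration: 2 -> 3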
+++ exit code: 0
Recording: run_statefulset_history_tests
Running command: run_statefulset_history_tests
... skipping 232 lines ...
message:statefulset.apps/nginx 
REVISION  CHANGE-CAUSE
2         kubectl apply --filename=hack/testdata/rollingupdate-statefulset-rv2.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true
3         kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true
has:3         kubectl apply
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
apps.sh:570: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx-slim:0.7:
apps.sh:571: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
statefulset.apps/nginx rolled back
apps.sh:574: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: registry.k8s.io/nginx-slim:0.8:
apps.sh:575: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: registry.k8s.io/pause:2.0:
... skipping 87 lines ...
Name:         mock
Namespace:    namespace-1685069657-31690
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        registry.k8s.io/pause:3.9
    Port:         9949/TCP
... skipping 61 lines ...
Name:         mock
Namespace:    namespace-1685069657-31690
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        registry.k8s.io/pause:3.9
    Port:         9949/TCP
... skipping 61 lines ...
Name:         mock
Namespace:    namespace-1685069657-31690
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        registry.k8s.io/pause:3.9
    Port:         9949/TCP
... skipping 42 lines ...
Namespace:    namespace-1685069657-31690
Selector:     app=mock
Labels:       app=mock
              status=replaced
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        registry.k8s.io/pause:3.9
    Port:         9949/TCP
... skipping 11 lines ...
Namespace:    namespace-1685069657-31690
Selector:     app=mock2
Labels:       app=mock2
              status=replaced
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock2
  Containers:
   mock-container:
    Image:        registry.k8s.io/pause:3.9
    Port:         9949/TCP
... skipping 24 lines ...
replicationcontroller/mock annotated
replicationcontroller/mock2 annotated
generic-resources.sh:159: Successful get rc mock {{.metadata.annotations.annotated}}: true
generic-resources.sh:161: Successful get rc mock2 {{.metadata.annotations.annotated}}: true
(Breplicationcontroller "mock" deleted
replicationcontroller "mock2" deleted
W0526 02:54:24.953280   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:54:24.953322   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Testing with file hack/testdata/multi-resource-svclist.json and replace with file hack/testdata/multi-resource-svclist-modify.json
generic-resources.sh:63: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: 
generic-resources.sh:64: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0526 02:54:25.319812   19396 alloc.go:330] "allocated clusterIPs" service="namespace-1685069657-31690/mock" clusterIPs={"IPv4":"10.0.0.138"}
service/mock created
I0526 02:54:25.324802   19396 alloc.go:330] "allocated clusterIPs" service="namespace-1685069657-31690/mock2" clusterIPs={"IPv4":"10.0.0.173"}
... skipping 58 lines ...
service "mock2" deleted
generic-resources.sh:173: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: 
generic-resources.sh:174: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0526 02:54:26.963768   22225 horizontal.go:512] "Horizontal Pod Autoscaler has been deleted" HPA="namespace-1685069644-26423/frontend"
I0526 02:54:27.336769   19396 alloc.go:330] "allocated clusterIPs" service="namespace-1685069657-31690/mock" clusterIPs={"IPv4":"10.0.0.152"}
service/mock created
W0526 02:54:27.363732   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:54:27.363762   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/mock created
I0526 02:54:27.376733   22225 event.go:307] "Event occurred" object="namespace-1685069657-31690/mock" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mock-sjg9p"
generic-resources.sh:180: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: mock:
generic-resources.sh:181: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: mock:
service "mock" deleted
replicationcontroller "mock" deleted
... skipping 9 lines ...
+++ [0526 02:54:27] Creating namespace namespace-1685069667-7505
namespace/namespace-1685069667-7505 created
Context "test" modified.
+++ [0526 02:54:27] Testing persistent volumes
storage.sh:30: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
persistentvolume/pv0001 created
E0526 02:54:28.258071   22225 pv_protection_controller.go:113] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
persistentvolume "pv0001" deleted
persistentvolume/pv0002 created
E0526 02:54:28.724641   22225 pv_protection_controller.go:113] PV pv0002 failed with : Operation cannot be fulfilled on persistentvolumes "pv0002": the object has been modified; please apply your changes to the latest version and try again
storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
persistentvolume "pv0002" deleted
persistentvolume/pv0003 created
E0526 02:54:29.189604   22225 pv_protection_controller.go:113] PV pv0003 failed with : Operation cannot be fulfilled on persistentvolumes "pv0003": the object has been modified; please apply your changes to the latest version and try again
storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
query for persistentvolumes had limit param
query for events had limit param
query for persistentvolumes had user-specified limit param
Successful describe persistentvolumes verbose logs:
I0526 02:54:29.311181   53778 loader.go:395] Config loaded from file:  /tmp/tmp.RJoDPro5JA/.kube/config
I0526 02:54:29.316390   53778 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0526 02:54:29.323325   53778 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/persistentvolumes?limit=500 200 OK in 1 milliseconds
I0526 02:54:29.325666   53778 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/persistentvolumes/pv0003 200 OK in 1 milliseconds
I0526 02:54:29.335713   53778 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/events?fieldSelector=involvedObject.namespace%3D%2CinvolvedObject.kind%3DPersistentVolume%2CinvolvedObject.uid%3Dc8c0c55a-510c-4ed0-83aa-78733c10a23c%2CinvolvedObject.name%3Dpv0003&limit=500 200 OK in 9 milliseconds
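The round_trippers lines above are client-go request logging, emitted at verbosity 6 and up; the test inspects them to confirm the describe path sent limit=500 on its list calls. The equivalent by hand (flag is standard; the grep is just for illustration):
  kubectl describe pv pv0003 -v=6 2>&1 | grep 'limit=500'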
persistentvolume "pv0003" deleted
storage.sh:44: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
persistentvolume/pv0001 created
E0526 02:54:29.870275   22225 pv_protection_controller.go:113] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
storage.sh:47: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
Successful
message:Warning: deleting cluster-scoped resources, not scoped to the provided namespace
persistentvolume "pv0001" deleted
has:Warning: deleting cluster-scoped resources
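That warning fires when a delete is namespaced but the matching resource is cluster-scoped, as PersistentVolumes are; the namespace flag is simply not applicable to them. An assumed invocation:
  kubectl delete pv pv0001 --namespace=namespace-1685069667-7505
  # Warning: deleting cluster-scoped resources, not scoped to the provided namespace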
Successful
... skipping 88 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 26 May 2023 02:48:43 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 26 May 2023 02:48:43 +0000   Fri, 26 May 2023 02:49:43 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 26 May 2023 02:48:43 +0000   Fri, 26 May 2023 02:49:43 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 26 May 2023 02:48:43 +0000   Fri, 26 May 2023 02:49:43 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 34 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 26 May 2023 02:48:43 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 26 May 2023 02:48:43 +0000   Fri, 26 May 2023 02:49:43 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 26 May 2023 02:48:43 +0000   Fri, 26 May 2023 02:49:43 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 26 May 2023 02:48:43 +0000   Fri, 26 May 2023 02:49:43 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 35 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 26 May 2023 02:48:43 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 26 May 2023 02:48:43 +0000   Fri, 26 May 2023 02:49:43 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 26 May 2023 02:48:43 +0000   Fri, 26 May 2023 02:49:43 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 26 May 2023 02:48:43 +0000   Fri, 26 May 2023 02:49:43 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 31 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 26 May 2023 02:48:43 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 26 May 2023 02:48:43 +0000   Fri, 26 May 2023 02:49:43 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 26 May 2023 02:48:43 +0000   Fri, 26 May 2023 02:49:43 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 26 May 2023 02:48:43 +0000   Fri, 26 May 2023 02:49:43 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 42 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 26 May 2023 02:48:43 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 26 May 2023 02:48:43 +0000   Fri, 26 May 2023 02:49:43 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 26 May 2023 02:48:43 +0000   Fri, 26 May 2023 02:49:43 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 26 May 2023 02:48:43 +0000   Fri, 26 May 2023 02:49:43 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 34 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 26 May 2023 02:48:43 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 26 May 2023 02:48:43 +0000   Fri, 26 May 2023 02:49:43 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 26 May 2023 02:48:43 +0000   Fri, 26 May 2023 02:49:43 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 26 May 2023 02:48:43 +0000   Fri, 26 May 2023 02:49:43 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 34 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 26 May 2023 02:48:43 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 26 May 2023 02:48:43 +0000   Fri, 26 May 2023 02:49:43 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 26 May 2023 02:48:43 +0000   Fri, 26 May 2023 02:49:43 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 26 May 2023 02:48:43 +0000   Fri, 26 May 2023 02:49:43 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 30 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 26 May 2023 02:48:43 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 26 May 2023 02:48:43 +0000   Fri, 26 May 2023 02:49:43 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 26 May 2023 02:48:43 +0000   Fri, 26 May 2023 02:49:43 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 26 May 2023 02:48:43 +0000   Fri, 26 May 2023 02:49:43 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 198 lines ...
yes
has:the server doesn't have a resource type
Successful
message:yes
has:yes
Successful
message:error: --subresource can not be used with NonResourceURL
has:subresource can not be used with NonResourceURL
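kubectl auth can-i treats an argument starting with "/" as a NonResourceURL, and subresources only exist on resources, hence the rejection; the valid form attaches --subresource to a resource check:
  kubectl auth can-i get /logs --subresource=log  # error: --subresource can not be used with NonResourceURL
  kubectl auth can-i get pods --subresource=log   # answers yes or no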
Successful
Successful
message:yes
0
has:0
... skipping 62 lines ...
		{Verbs:[get list watch] APIGroups:[] Resources:[configmaps] ResourceNames:[] NonResourceURLs:[]}
legacy-script.sh:893: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
legacy-script.sh:894: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
legacy-script.sh:895: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
legacy-script.sh:896: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
Successful
message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
has:only rbac.authorization.k8s.io/v1 is supported
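`kubectl auth reconcile` only understands rbac.authorization.k8s.io/v1 objects, which is exactly what the failure above asserts. A sketch of the failing call, reusing the testing-CR name and configmaps rule from the surrounding test (the manifest body is illustrative):

  # Rejected with "error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole"
  cat <<'EOF' | kubectl auth reconcile -f -
  apiVersion: rbac.authorization.k8s.io/v1beta1
  kind: ClusterRole
  metadata:
    name: testing-CR
  rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch"]
  EOF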
rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
role.rbac.authorization.k8s.io "testing-R" deleted
Warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
... skipping 24 lines ...
discovery.sh:236: Successful get all -l app=cassandra {{range.items}}{{range .metadata.labels}}{{.}}:{{end}}{{end}}: cassandra:cassandra:cassandra:cassandra:
(Bpod "cassandra-6k5f7" deleted
I0526 02:54:41.350789   22225 event.go:307] "Event occurred" object="namespace-1685069680-17645/cassandra" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cassandra-mp42q"
pod "cassandra-jkt4z" deleted
I0526 02:54:41.361097   22225 event.go:307] "Event occurred" object="namespace-1685069680-17645/cassandra" fieldPath="" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cassandra-w9r86"
replicationcontroller "cassandra" deleted
E0526 02:54:41.365619   22225 replica_set.go:556] sync "namespace-1685069680-17645/cassandra" failed with replicationcontrollers "cassandra" not found
service "cassandra" deleted
+++ exit code: 0
Recording: run_kubectl_explain_tests
Running command: run_kubectl_explain_tests

+++ Running case: test-cmd.run_kubectl_explain_tests 
... skipping 65 lines ...
  status	<PodStatus>
    Most recently observed status of the pod. This data may not be up to date.
    Populated by the system. Read-only. More info:
    https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status


W0526 02:54:42.015421   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:54:42.015480   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
KIND:       Pod
VERSION:    v1

FIELD: message <string>

DESCRIPTION:
... skipping 120 lines ...
volumeattachments                              storage.k8s.io/v1                      false        VolumeAttachment
I0526 02:54:45.373273   19396 handler.go:232] Adding GroupVersion test.com v1 to ResourceManager
Successful
message:customresourcedefinition.apiextensions.k8s.io/examples.test.com created
has:created
W0526 02:54:45.969091   19396 cacher.go:171] Terminating all watchers from cacher examples.test.com
W0526 02:54:47.460963   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:54:47.460995   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:example.test.com/test created
has:created
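The created/deleted pair above comes from registering a CRD and then creating an instance of it. A sketch under the same names (the manifest file is hypothetical; group test.com and resource examples are from the log, kind Example is assumed):

  kubectl create -f examples-crd.yaml   # customresourcedefinition.apiextensions.k8s.io/examples.test.com created
  cat <<'EOF' | kubectl create -f -     # example.test.com/test created
  apiVersion: test.com/v1
  kind: Example
  metadata:
    name: test
  EOF
  kubectl delete example.test.com test  # example.test.com "test" deleted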
example.test.com "test" deleted
I0526 02:54:47.688616   19396 handler.go:232] Adding GroupVersion test.com v1 to ResourceManager
customresourcedefinition.apiextensions.k8s.io "examples.test.com" deleted
... skipping 18 lines ...
No resources found in namespace-1685069682-27541 namespace.
No resources found in namespace-1685069682-27541 namespace.
get.sh:314: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/valid-pod created
get.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
W0526 02:54:48.700662   19396 cacher.go:171] Terminating all watchers from cacher examples.test.com
E0526 02:54:48.701967   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
has:valid-pod
Successful
message:I0526 02:54:48.782985   56518 loader.go:395] Config loaded from file:  /tmp/tmp.RJoDPro5JA/.kube/config
... skipping 109 lines ...
get.sh:408: Successful get namespaces {{range.items}}{{if eq .metadata.name "default"}}{{.metadata.name}}:{{end}}{{end}}: default:
get.sh:412: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/valid-pod created
get.sh:416: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
NAMESPACE                    NAME        READY   STATUS    RESTARTS   AGE
namespace-1685069682-27541   valid-pod   0/1     Pending   0          0s
W0526 02:54:51.361186   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:54:51.361220   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/all-ns-test-1 created
serviceaccount/test created
namespace/all-ns-test-2 created
serviceaccount/test created
Successful
message:NAMESPACE                    NAME      SECRETS   AGE
... skipping 121 lines ...
namespace-1685069670-20505   default   0         21s
namespace-1685069680-17645   default   0         11s
namespace-1685069682-27541   default   0         9s
some-other-random            default   0         12s
has:all-ns-test-2
namespace "all-ns-test-1" deleted
W0526 02:54:56.642433   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:54:56.642475   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace "all-ns-test-2" deleted
W0526 02:54:59.988336   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:54:59.988375   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0526 02:55:01.871307   22225 namespace_controller.go:182] "Namespace has been deleted" namespace="all-ns-test-1"
get.sh:442: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
get.sh:446: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
get.sh:450: Successful get nodes {{range.items}}{{.metadata.name}}:{{end}}: 127.0.0.1:
... skipping 19 lines ...
message:Warning: example.com/v1beta1 DeprecatedKind is deprecated; use example.com/v1 DeprecatedKind
No resources found in namespace-1685069682-27541 namespace.
has:example.com/v1beta1 DeprecatedKind is deprecated
Successful
message:Warning: example.com/v1beta1 DeprecatedKind is deprecated; use example.com/v1 DeprecatedKind
No resources found in namespace-1685069682-27541 namespace.
error: 1 warning received
has:example.com/v1beta1 DeprecatedKind is deprecated
Successful
message:Warning: example.com/v1beta1 DeprecatedKind is deprecated; use example.com/v1 DeprecatedKind
No resources found in namespace-1685069682-27541 namespace.
error: 1 warning received
has:error: 1 warning received
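These three checks show how kubectl surfaces server-sent deprecation warnings. A plain `kubectl get` prints the warning and still exits 0; with --warnings-as-errors (a real kubectl flag) the same warning makes the command exit non-zero with "error: 1 warning received". A sketch, assuming the deprecated.example.com CRD above serves its resource under the plural deprecatedkinds (plural inferred from kind DeprecatedKind, not confirmed by the log):

  kubectl get deprecatedkinds.v1beta1.example.com                        # warning printed, exit 0
  kubectl get deprecatedkinds.v1beta1.example.com --warnings-as-errors   # warning + "error: 1 warning received"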
I0526 02:55:02.942675   19396 handler.go:232] Adding GroupVersion example.com v1 to ResourceManager
customresourcedefinition.apiextensions.k8s.io "deprecated.example.com" deleted
I0526 02:55:02.942724   19396 handler.go:232] Adding GroupVersion example.com v1beta1 to ResourceManager
I0526 02:55:02.947233   19396 handler.go:232] Adding GroupVersion example.com v1 to ResourceManager
I0526 02:55:02.947277   19396 handler.go:232] Adding GroupVersion example.com v1beta1 to ResourceManager
+++ exit code: 0
... skipping 266 lines ...
Successful
message:deploy:
has:deploy:
Successful
message:Config:
has:Config
W0526 02:55:07.306603   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:55:07.306636   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: cm
... skipping 292 lines ...
evicting pod namespace-1685069710-14440/test-pod-1 (server dry run)
node/127.0.0.1 drained (server dry run)
node-management.sh:140: Successful get pods {{range .items}}{{.metadata.name}},{{end}}: test-pod-1,test-pod-2,
Warning: deleting Pods that declare no controller: namespace-1685069710-14440/test-pod-1
I0526 02:55:14.018447   22225 shared_informer.go:311] Waiting for caches to sync for garbage collector
I0526 02:55:14.018492   22225 shared_informer.go:318] Caches are synced for garbage collector
W0526 02:55:22.685028   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:55:22.685066   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0526 02:55:25.225636   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:55:25.225681   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0526 02:55:27.315905   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:55:27.315949   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0526 02:55:28.792839   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:55:28.792897   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0526 02:55:41.636572   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:55:41.636607   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:node/127.0.0.1 cordoned
evicting pod namespace-1685069710-14440/test-pod-1
pod "test-pod-1" has DeletionTimestamp older than 1 seconds, skipping
node/127.0.0.1 drained
has:evicting pod .*/test-pod-1
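The "DeletionTimestamp older than 1 seconds, skipping" line corresponds to drain's --skip-wait-for-delete-timeout option, which stops drain from waiting on pods that are already marked for deletion. A sketch of the likely invocation (the flag is real; the exact arguments node-management.sh passes are an assumption):

  # Evict pods, but skip waiting for any pod whose deletionTimestamp is
  # already more than 1 second old.
  kubectl drain 127.0.0.1 --force --skip-wait-for-delete-timeout=1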
... skipping 14 lines ...
message:node/127.0.0.1 already uncordoned (server dry run)
has:already uncordoned
node-management.sh:161: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
node/127.0.0.1 labeled
node-management.sh:166: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
Successful
message:error: cannot specify both a node name and a --selector option
See 'kubectl drain -h' for help and examples
has:cannot specify both a node name
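kubectl drain (like cordon/uncordon) targets nodes either by explicit name or by --selector, never both, which is what the error above asserts. A one-line reproduction (label value hypothetical):

  kubectl drain 127.0.0.1 --selector=test=label
  # -> error: cannot specify both a node name and a --selector option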
node-management.sh:172: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
node-management.sh:174: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
node-management.sh:176: Successful get pods {{range .items}}{{.metadata.name}},{{end}}: test-pod-1,test-pod-2,
Successful
... skipping 78 lines ...
Warning: deleting Pods that declare no controller: namespace-1685069710-14440/test-pod-1, namespace-1685069710-14440/test-pod-2
evicting pod namespace-1685069710-14440/test-pod-1 (dry run)
evicting pod namespace-1685069710-14440/test-pod-2 (dry run)
node/127.0.0.1 drained (dry run)
has:/v1/pods?fieldSelector=spec.nodeName%3D127.0.0.1&limit=500 200 OK
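The "has:/v1/pods?fieldSelector=... 200 OK" assertion works because at client verbosity -v=6 and above kubectl logs each API request, and drain lists only the pods bound to the target node via a spec.nodeName field selector. A sketch (the verbosity level used by the test is an assumption):

  kubectl drain 127.0.0.1 --dry-run=client -v=6 2>&1 | grep 'fieldSelector=spec.nodeName'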
Successful
message:error: USAGE: cordon NODE [flags]
See 'kubectl cordon -h' for help and examples
has:error\: USAGE\: cordon NODE
node/127.0.0.1 already uncordoned
Successful
message:error: You must provide one or more resources by argument or filename.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
   '<resource> <name>'
   '<resource>'
has:must provide one or more resources
... skipping 18 lines ...
+++ [0526 02:55:47] Testing kubectl plugins
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/version/kubectl-version
  - warning: kubectl-version overwrites existing command: "kubectl version"
error: one plugin warning was found
has:kubectl-version overwrites existing command: "kubectl version"
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/kubectl-foo
test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
  - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
error: one plugin warning was found
has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/kubectl-foo
has:plugins are available
Successful
message:Unable to read directory "test/fixtures/pkg/kubectl/plugins/empty" from your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory. Skipping...
error: unable to find any kubectl plugins in your PATH
has:unable to find any kubectl plugins in your PATH
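kubectl plugin discovery is purely PATH-based: any executable named kubectl-* is a plugin, `kubectl plugin list` enumerates them (warning when one shadows another or overwrites a builtin, as above), and `kubectl foo` dispatches to kubectl-foo. A minimal sketch with a hypothetical plugin directory:

  mkdir -p /tmp/kplugins
  printf '#!/bin/sh\necho "I am plugin foo"\n' > /tmp/kplugins/kubectl-foo
  chmod +x /tmp/kplugins/kubectl-foo
  PATH="/tmp/kplugins:$PATH" kubectl plugin list   # lists kubectl-foo
  PATH="/tmp/kplugins:$PATH" kubectl foo           # prints "I am plugin foo"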
Successful
message:I am plugin foo
has:plugin foo
Successful
message:I am plugin bar called with args test/fixtures/pkg/kubectl/plugins/bar/kubectl-bar arg1
... skipping 13 lines ...

+++ Running case: test-cmd.run_impersonation_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_impersonation_tests
+++ [0526 02:55:47] Testing impersonation
Successful
message:error: requesting uid, groups or user-extra for test-admin without impersonating a user
has:without impersonating a user
Successful
message:error: requesting uid, groups or user-extra for test-admin without impersonating a user
has:without impersonating a user
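Both failures assert that the impersonation extras need a base identity: uid, groups, or user-extra only make sense together with a user to impersonate (--as). A sketch (uid value hypothetical):

  kubectl get pods --as-uid=1234               # rejected: extras without impersonating a user
  kubectl get pods --as=user1 --as-uid=1234    # accepted form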
certificatesigningrequest.certificates.k8s.io/foo created
authorization.sh:60: Successful get csr/foo {{.spec.username}}: user1
authorization.sh:61: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
certificatesigningrequest.certificates.k8s.io "foo" deleted
certificatesigningrequest.certificates.k8s.io/foo created
... skipping 19 lines ...
I0526 02:55:49.277298   22225 event.go:307] "Event occurred" object="namespace-1685069749-31269/test-1" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-1-7697bf65f7 to 1"
I0526 02:55:49.284097   22225 event.go:307] "Event occurred" object="namespace-1685069749-31269/test-1-7697bf65f7" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-1-7697bf65f7-wz2n5"
deployment.apps/test-2 created
I0526 02:55:49.343017   22225 event.go:307] "Event occurred" object="namespace-1685069749-31269/test-2" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-2-675f68f47d to 1"
I0526 02:55:49.346120   22225 event.go:307] "Event occurred" object="namespace-1685069749-31269/test-2-675f68f47d" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-2-675f68f47d-fcxpb"
wait.sh:36: Successful get deployments {{range .items}}{{.metadata.name}},{{end}}: test-1,test-2,
W0526 02:55:58.754171   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:55:58.754207   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0526 02:56:02.361301   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:56:02.361337   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0526 02:56:10.320308   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:56:10.320342   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0526 02:56:12.893523   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:56:12.893569   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:error: timed out waiting for the condition on deployments/test-1
has:timed out
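The timeout above is `kubectl wait` giving up: with no kubelet running the pods, the deployments' Available condition never becomes true. A sketch of the failing check (timeout value hypothetical; the "condition met" lines just below come from waiting on a condition that does hold):

  kubectl wait --for=condition=Available deployment/test-1 --timeout=30s
  # -> error: timed out waiting for the condition on deployments/test-1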
deployment.apps "test-1" deleted
deployment.apps "test-2" deleted
Successful
message:deployment.apps/test-1 condition met
deployment.apps/test-2 condition met
... skipping 171 lines ...
debug.sh:369: Successful get pod -n namespace-restricted {{range.items}}{{.metadata.name}}:{{end}}: target:target-copy:
debug.sh:370: Successful get pod/target-copy -n namespace-restricted {{range.spec.containers}}{{.name}}:{{end}}: target:debug-container:
debug.sh:371: Successful get pod/target-copy -n namespace-restricted {{range.spec.containers}}{{.image}}:{{end}}: busybox:busybox:
(Bpod "target" deleted
pod "target-copy" deleted
namespace "namespace-restricted" deleted
W0526 02:56:35.831732   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:56:35.831768   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Recording: run_kubectl_debug_node_tests
Running command: run_kubectl_debug_node_tests

+++ Running case: test-cmd.run_kubectl_debug_node_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 64 lines ...
debug.sh:269: Successful get pod/node-debugger-127.0.0.1-hhzs2 {{(index .spec.containers 0).image}}: busybox
debug.sh:270: Successful get pod/node-debugger-127.0.0.1-hhzs2 {{.spec.nodeName}}: 127.0.0.1
debug.sh:271: Successful get pod/node-debugger-127.0.0.1-hhzs2 {{.spec.hostIPC}}: <no value>
debug.sh:272: Successful get pod/node-debugger-127.0.0.1-hhzs2 {{.spec.hostNetwork}}: <no value>
debug.sh:273: Successful get pod/node-debugger-127.0.0.1-hhzs2 {{.spec.hostPID}}: <no value>
debug.sh:274: Successful get pod/node-debugger-127.0.0.1-hhzs2 {{if (index (index .spec.containers 0) "securityContext")}}:{{end}}: 
W0526 02:56:40.045106   22225 reflector.go:538] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0526 02:56:40.045144   22225 reflector.go:149] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "node-debugger-127.0.0.1-hhzs2" force deleted
+++ exit code: 0
Recording: run_kubectl_debug_restricted_node_tests
Running command: run_kubectl_debug_restricted_node_tests

... skipping 4900 lines ...
{"Time":"2023-05-26T03:12:28.603968079Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestScalePausedDeployment","Output":"}}},\"f:spec\":{\"f:containers\":{\"k:{\\\"name\\\":\\\"fake-name\\\"}\":{\".\":{},\"f:image\":{},\"f:imagePullPolicy\":{},\"f:name\":{},\"f:resources\":{},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{}}},\"f:dnsPolicy\":{},\"f:enableServiceLinks\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{}}}}]} curObjectMeta={\"name\":\"test-scale-paused-deployment-689bc76696-sxrjc\",\"generateName\":\"test-scale-paused-deployment-689bc76696-\",\"namespace\":\"test-scale-paused-deployment\",\"uid\":\"12445cce-e948-44ba-ada1-2328372a6f41\",\"resourceVersion\":\"41046\",\"creationTimestamp\":\"2023-05-26T03:12:26Z\",\"labels\":{\"name\":\"test\",\"pod-template-hash\":\"689bc76696\"},\"ownerReferences\":[{\"apiVersion\":\"apps/v1\",\"kind\":\"ReplicaSet\",\"name\":\"test-scale-paused-deployment-689bc76696\",\"uid\":\"ea7363dd-a2bf-4873-aba9-393450449f13\",\"controller\":true,\"blockOwnerDeletion\":true}],\"managedFields\":[{\"manager\":\"deployment.test\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2023-05-26T03:12:26Z\",\"fieldsType\":\"Fie"}
{"Time":"2023-05-26T03:12:28.612458111Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestScalePausedDeployment","Output":"}}},\"f:spec\":{\"f:containers\":{\"k:{\\\"name\\\":\\\"fake-name\\\"}\":{\".\":{},\"f:image\":{},\"f:imagePullPolicy\":{},\"f:name\":{},\"f:resources\":{},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{}}},\"f:dnsPolicy\":{},\"f:enableServiceLinks\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{}}}}]} curObjectMeta={\"name\":\"test-scale-paused-deployment-689bc76696-tqp59\",\"generateName\":\"test-scale-paused-deployment-689bc76696-\",\"namespace\":\"test-scale-paused-deployment\",\"uid\":\"a1ceaa24-91a2-4b11-b21c-30303efedebe\",\"resourceVersion\":\"41048\",\"creationTimestamp\":\"2023-05-26T03:12:26Z\",\"labels\":{\"name\":\"test\",\"pod-template-hash\":\"689bc76696\"},\"ownerReferences\":[{\"apiVersion\":\"apps/v1\",\"kind\":\"ReplicaSet\",\"name\":\"test-scale-paused-deployment-689bc76696\",\"uid\":\"ea7363dd-a2bf-4873-aba9-393450449f13\",\"controller\":true,\"blockOwnerDeletion\":true}],\"managedFields\":[{\"manager\":\"deployment.test\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2023-05-26T03:12:26Z\",\"fieldsType\":\"Fie"}
{"Time":"2023-05-26T03:12:28.613446574Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestScalePausedDeployment","Output":"}}},\"f:spec\":{\"f:containers\":{\"k:{\\\"name\\\":\\\"fake-name\\\"}\":{\".\":{},\"f:image\":{},\"f:imagePullPolicy\":{},\"f:name\":{},\"f:resources\":{},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{}}},\"f:dnsPolicy\":{},\"f:enableServiceLinks\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{}}}}]} curObjectMeta={\"name\":\"test-scale-paused-deployment-689bc76696-w26wn\",\"generateName\":\"test-scale-paused-deployment-689bc76696-\",\"namespace\":\"test-scale-paused-deployment\",\"uid\":\"da156619-d888-4c28-96d0-25951c00886b\",\"resourceVersion\":\"41051\",\"creationTimestamp\":\"2023-05-26T03:12:26Z\",\"labels\":{\"name\":\"test\",\"pod-template-hash\":\"689bc76696\"},\"ownerReferences\":[{\"apiVersion\":\"apps/v1\",\"kind\":\"ReplicaSet\",\"name\":\"test-scale-paused-deployment-689bc76696\",\"uid\":\"ea7363dd-a2bf-4873-aba9-393450449f13\",\"controller\":true,\"blockOwnerDeletion\":true}],\"managedFields\":[{\"manager\":\"deployment.test\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2023-05-26T03:12:26Z\",\"fieldsType\":\"Fie"}
{"Time":"2023-05-26T03:12:33.286985869Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestDeploymentHashCollision","Output":"minationMessagePolicy\\\":{}}},\\\"f:dnsPolicy\\\":{},\\\"f:enableServiceLinks\\\":{},\\\"f:restartPolicy\\\":{},\\\"f:schedulerName\\\":{},\\\"f:securityContext\\\":{},\\\"f:terminationGracePeriodSeconds\\\":{}}} }]},Spec:PodSpec{Volumes:[]Volume{},Containers:[]Container{Container{Name:fake-name,Image:fakeimage,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false"}
{"Time":"2023-05-26T03:12:33.400082975Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestDeploymentHashCollision","Output":"ssagePolicy\\\":{}}},\\\"f:dnsPolicy\\\":{},\\\"f:enableServiceLinks\\\":{},\\\"f:restartPolicy\\\":{},\\\"f:schedulerName\\\":{},\\\"f:securityContext\\\":{},\\\"f:terminationGracePeriodSeconds\\\":{}}} }]},Spec:PodSpec{Volumes:[]Volume{},Containers:[]Container{Container{Name:fake-name,Image:fakeimage,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:f"}
{"Time":"2023-05-26T03:12:37.120039821Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestFailedDeployment","Output":"restartPolicy\\\":{},\\\"f:schedulerName\\\":{},\\\"f:securityContext\\\":{},\\\"f:terminationGracePeriodSeconds\\\":{}}} }]},Spec:PodSpec{Volumes:[]Volume{},Containers:[]Container{Container{Name:fake-name,Image:fakeimage,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:\u0026PodSecurityContext{SELinuxOptions:"}
{"Time":"2023-05-26T03:12:41.025139795Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestFailedDeployment","Output":"f:image\":{},\"f:imagePullPolicy\":{},\"f:name\":{},\"f:resources\":{},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{}}},\"f:dnsPolicy\":{},\"f:enableServiceLinks\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{}}}}]} curObjectMeta={\"name\":\"progress-check-6cc86d97fd-n5d5v\",\"generateName\":\"progress-check-6cc86d97fd-\",\"namespace\":\"test-failed-deployment\",\"uid\":\"e10ce2f2-2265-4b0c-bc03-6c50da9f5314\",\"resourceVersion\":\"41756\",\"creationTimestamp\":\"2023-05-26T03:12:37Z\",\"labels\":{\"name\":\"test\",\"pod-template-hash\":\"6cc86d97fd\"},\"ownerReferences\":[{\"apiVersion\":\"apps/v1\",\"kind\":\"ReplicaSet\",\"name\":\"progress-check-6cc86d97fd\",\"uid\":\"5d8677a0-5064-4e89-a99a-ee462a1f2c3b\",\"controller\":true,\"blockOwnerDeletion\":true}],\"managedFields\":[{\"manager\":\"deployment.test\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2023-05-26T03:12:37Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:metadata\":{\"f:generateName\":{},\"f:labels\":{\".\":{},\"f:name\":{},\"f:pod-template-hash\":{}},\"f:own"}
{"Time":"2023-05-26T03:12:45.605669904Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestOverlappingDeployments","Output":"rviceLinks\\\":{},\\\"f:restartPolicy\\\":{},\\\"f:schedulerName\\\":{},\\\"f:securityContext\\\":{},\\\"f:terminationGracePeriodSeconds\\\":{}}} }]},Spec:PodSpec{Volumes:[]Volume{},Containers:[]Container{Container{Name:fake-name,Image:fakeimage,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:\u0026PodSecurityCon"}
{"Time":"2023-05-26T03:12:45.7137565Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestOverlappingDeployments","Output":"e-name\\\"}\":{\".\":{},\"f:image\":{},\"f:imagePullPolicy\":{},\"f:name\":{},\"f:resources\":{},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{}}},\"f:dnsPolicy\":{},\"f:enableServiceLinks\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{}}}}]} curObjectMeta={\"name\":\"first-deployment-6cc86d97fd-lz7rx\",\"generateName\":\"first-deployment-6cc86d97fd-\",\"namespace\":\"test-overlapping-deployments\",\"uid\":\"8b38c52a-cb2e-4c69-8e9d-7b172f40b34f\",\"resourceVersion\":\"42040\",\"creationTimestamp\":\"2023-05-26T03:12:45Z\",\"labels\":{\"name\":\"test\",\"pod-template-hash\":\"6cc86d97fd\"},\"ownerReferences\":[{\"apiVersion\":\"apps/v1\",\"kind\":\"ReplicaSet\",\"name\":\"first-deployment-6cc86d97fd\",\"uid\":\"38a69928-0a5a-47b5-a452-aff0960f9ea7\",\"controller\":true,\"blockOwnerDeletion\":true}],\"managedFields\":[{\"manager\":\"deployment.test\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2023-05-26T03:12:45Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:metadata\":{\"f:generateName\":{},\"f:labels\":{\".\":{},\"f:name\":{},"}
{"Time":"2023-05-26T03:12:45.825634443Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestOverlappingDeployments","Output":"leServiceLinks\\\":{},\\\"f:restartPolicy\\\":{},\\\"f:schedulerName\\\":{},\\\"f:securityContext\\\":{},\\\"f:terminationGracePeriodSeconds\\\":{}}} }]},Spec:PodSpec{Volumes:[]Volume{},Containers:[]Container{Container{Name:fake-name,Image:fakeimage,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:\u0026PodSecurit"}
{"Time":"2023-05-26T03:12:45.933268731Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestOverlappingDeployments","Output":"\"fake-name\\\"}\":{\".\":{},\"f:image\":{},\"f:imagePullPolicy\":{},\"f:name\":{},\"f:resources\":{},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{}}},\"f:dnsPolicy\":{},\"f:enableServiceLinks\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{}}}}]} curObjectMeta={\"name\":\"second-deployment-6cc86d97fd-4b689\",\"generateName\":\"second-deployment-6cc86d97fd-\",\"namespace\":\"test-overlapping-deployments\",\"uid\":\"8a90ff28-bfa8-4210-b473-7c54e435fcd9\",\"resourceVersion\":\"42058\",\"creationTimestamp\":\"2023-05-26T03:12:45Z\",\"labels\":{\"name\":\"test\",\"pod-template-hash\":\"6cc86d97fd\"},\"ownerReferences\":[{\"apiVersion\":\"apps/v1\",\"kind\":\"ReplicaSet\",\"name\":\"second-deployment-6cc86d97fd\",\"uid\":\"8f630b53-decb-47f3-95d4-289550e2a360\",\"controller\":true,\"blockOwnerDeletion\":true}],\"managedFields\":[{\"manager\":\"deployment.test\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2023-05-26T03:12:45Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:metadata\":{\"f:generateName\":{},\"f:labels\":{\".\":{},\"f:na"}
{"Time":"2023-05-26T03:12:46.047432106Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestOverlappingDeployments","Output":"rviceLinks\\\":{},\\\"f:restartPolicy\\\":{},\\\"f:schedulerName\\\":{},\\\"f:securityContext\\\":{},\\\"f:terminationGracePeriodSeconds\\\":{}}} }]},Spec:PodSpec{Volumes:[]Volume{},Containers:[]Container{Container{Name:fake-name,Image:fakeimage,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:\u0026PodSecurityCon"}
{"Time":"2023-05-26T03:12:46.153432062Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestOverlappingDeployments","Output":"e-name\\\"}\":{\".\":{},\"f:image\":{},\"f:imagePullPolicy\":{},\"f:name\":{},\"f:resources\":{},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{}}},\"f:dnsPolicy\":{},\"f:enableServiceLinks\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{}}}}]} curObjectMeta={\"name\":\"first-deployment-6cc86d97fd-ddcvp\",\"generateName\":\"first-deployment-6cc86d97fd-\",\"namespace\":\"test-overlapping-deployments\",\"uid\":\"11581330-524a-4581-b1ef-42ca594b126a\",\"resourceVersion\":\"42071\",\"creationTimestamp\":\"2023-05-26T03:12:46Z\",\"labels\":{\"name\":\"test\",\"pod-template-hash\":\"6cc86d97fd\"},\"ownerReferences\":[{\"apiVersion\":\"apps/v1\",\"kind\":\"ReplicaSet\",\"name\":\"first-deployment-6cc86d97fd\",\"uid\":\"38a69928-0a5a-47b5-a452-aff0960f9ea7\",\"controller\":true,\"blockOwnerDeletion\":true}],\"managedFields\":[{\"manager\":\"deployment.test\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2023-05-26T03:12:46Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:metadata\":{\"f:generateName\":{},\"f:labels\":{\".\":{},\"f:name\":{},"}
... skipping 1002 lines ...
{"Time":"2023-05-26T03:18:45.071988497Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/garbagecollector","Output":"ok  \tk8s.io/kubernetes/test/integration/garbagecollector\t152.225s\n"}
{"Time":"2023-05-26T03:18:46.092891361Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/pods","Output":"ok  \tk8s.io/kubernetes/test/integration/pods\t25.208s\n"}
{"Time":"2023-05-26T03:18:57.505919275Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/quota","Test":"TestQuota","Output":"gs rbac.authorization.k8s.io/v1, Resource=roles storage.k8s.io/v1, Resource=csistoragecapacities], removed: []\"\n"}
{"Time":"2023-05-26T03:18:59.93783691Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/pvc","Output":"ok  \tk8s.io/kubernetes/test/integration/pvc\t9.086s\n"}
{"Time":"2023-05-26T03:19:03.822226227Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/podgc","Output":"ok  \tk8s.io/kubernetes/test/integration/podgc\t47.967s\n"}
{"Time":"2023-05-26T03:19:08.725076041Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/replicaset","Test":"TestAdoption/pod_refers_rs_as_an_owner,_not_a_controller","Output":"onds\":{}}}}]} curObjectMeta={\"name\":\"pod0\",\"namespace\":\"rs-adoption-0\",\"uid\":\"e9197de8-b805-4a05-b7b6-a08450e13e5b\",\"resourceVersion\":\"62153\",\"creationTimestamp\":\"2023-05-26T03:19:08Z\",\"labels\":{\"foo\":\"bar\"},\"ownerReferences\":[{\"apiVersion\":\"apps/v1\",\"kind\":\"ReplicaSet\",\"name\":\"rs\",\"uid\":\"5c9c2c7a-8b88-4d15-8a82-4ac08706dbbd\",\"controller\":true,\"blockOwnerDeletion\":true}],\"managedFields\":[{\"manager\":\"replicaset.test\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2023-05-26T03:19:08Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:metadata\":{\"f:labels\":{\".\":{},\"f:foo\":{}},\"f:ownerReferences\":{\".\":{},\"k:{\\\"uid\\\":\\\"5c9c2c7a-8b88-4d15-8a82-4ac08706dbbd\\\"}\":{}}},\"f:spec\":{\"f:containers\":{\"k:{\\\"name\\\":\\\"fake-name\\\"}\":{\".\":{},\"f:image\":{},\"f:imagePullPolicy\":{},\"f:name\":{},\"f:resources\":{},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{}}},\"f:dnsPolicy\":{},\"f:enableServiceLinks\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{}}}}]}\n"}
{"Time":"2023-05-26T03:19:31.265459401Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/replicaset","Test":"TestDeletingAndFailedPods","Output":"y\":{},\"f:enableServiceLinks\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{}}}}]} curObjectMeta={\"name\":\"rs-rddql\",\"generateName\":\"rs-\",\"namespace\":\"test-deleting-and-failed-pods\",\"uid\":\"3abcae1c-a96b-447e-8298-a7f2179a47d6\",\"resourceVersion\":\"63307\",\"creationTimestamp\":\"2023-05-26T03:19:31Z\",\"labels\":{\"foo\":\"bar\"},\"ownerReferences\":[{\"apiVersion\":\"apps/v1\",\"kind\":\"ReplicaSet\",\"name\":\"rs\",\"uid\":\"485fdc77-de83-4cb3-9bf3-5d00271494e0\",\"controller\":true,\"blockOwnerDeletion\":true}],\"finalizers\":[\"fake.example.com/blockDeletion\"],\"managedFields\":[{\"manager\":\"replicaset.test\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2023-05-26T03:19:31Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:metadata\":{\"f:finalizers\":{\".\":{},\"v:\\\"fake.example.com/blockDeletion\\\"\":{}},\"f:generateName\":{},\"f:labels\":{\".\":{},\"f:foo\":{}},\"f:ownerReferences\":{\".\":{},\"k:{\\\"uid\\\":\\\"485fdc77-de83-4cb3-9bf3-5d00271494e0\\\"}\":{}}},\"f:spec\":{\"f:containers\":{\"k:{\\\"name\\\":\\\"fake-name\\\"}\":{\".\":{},\"f:"}
{"Time":"2023-05-26T03:19:31.271518527Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/replicaset","Test":"TestDeletingAndFailedPods","Output":"y\":{},\"f:enableServiceLinks\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{}}}}]} curObjectMeta={\"name\":\"rs-xt4kg\",\"generateName\":\"rs-\",\"namespace\":\"test-deleting-and-failed-pods\",\"uid\":\"3b11fc89-c8df-4dc0-9f6d-6999372d59b3\",\"resourceVersion\":\"63310\",\"creationTimestamp\":\"2023-05-26T03:19:31Z\",\"labels\":{\"foo\":\"bar\"},\"ownerReferences\":[{\"apiVersion\":\"apps/v1\",\"kind\":\"ReplicaSet\",\"name\":\"rs\",\"uid\":\"485fdc77-de83-4cb3-9bf3-5d00271494e0\",\"controller\":true,\"blockOwnerDeletion\":true}],\"managedFields\":[{\"manager\":\"replicaset.test\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2023-05-26T03:19:31Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:metadata\":{\"f:generateName\":{},\"f:labels\":{\".\":{},\"f:foo\":{}},\"f:ownerReferences\":{\".\":{},\"k:{\\\"uid\\\":\\\"485fdc77-de83-4cb3-9bf3-5d00271494e0\\\"}\":{}}},\"f:spec\":{\"f:containers\":{\"k:{\\\"name\\\":\\\"fake-name\\\"}\":{\".\":{},\"f:image\":{},\"f:imagePullPolicy\":{},\"f:name\":{},\"f:resources\":{},\"f:terminationMessagePath\":{},\"f:terminationMessageP"}
{"Time":"2023-05-26T03:19:35.010764764Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/replicaset","Test":"TestPodDeletionCost/enabled-with-different-costs","Output":":{},\"f:enableServiceLinks\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{}}}}]} curObjectMeta={\"name\":\"rs-676rz\",\"generateName\":\"rs-\",\"namespace\":\"enabled-with-different-costs\",\"uid\":\"feb00c15-783e-4331-a036-333603b05393\",\"resourceVersion\":\"63498\",\"creationTimestamp\":\"2023-05-26T03:19:34Z\",\"labels\":{\"foo\":\"bar\"},\"annotations\":{\"controller.kubernetes.io/pod-deletion-cost\":\"1000\"},\"ownerReferences\":[{\"apiVersion\":\"apps/v1\",\"kind\":\"ReplicaSet\",\"name\":\"rs\",\"uid\":\"fa4192ba-0789-498c-b99a-581907aaa82b\",\"controller\":true,\"blockOwnerDeletion\":true}],\"managedFields\":[{\"manager\":\"replicaset.test\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2023-05-26T03:19:35Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:controller.kubernetes.io/pod-deletion-cost\":{}},\"f:generateName\":{},\"f:labels\":{\".\":{},\"f:foo\":{}},\"f:ownerReferences\":{\".\":{},\"k:{\\\"uid\\\":\\\"fa4192ba-0789-498c-b99a-581907aaa82b\\\"}\":{}}},\"f:spec\":{\"f:containers\":{\"k:{\\\"name\\\":\\"}
{"Time":"2023-05-26T03:19:35.014733312Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/replicaset","Test":"TestPodDeletionCost/enabled-with-different-costs","Output":",\"f:image\":{},\"f:imagePullPolicy\":{},\"f:name\":{},\"f:resources\":{},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{}}},\"f:dnsPolicy\":{},\"f:enableServiceLinks\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{}}}}]} curObjectMeta={\"name\":\"rs-676rz\",\"generateName\":\"rs-\",\"namespace\":\"enabled-with-different-costs\",\"uid\":\"feb00c15-783e-4331-a036-333603b05393\",\"resourceVersion\":\"63499\",\"creationTimestamp\":\"2023-05-26T03:19:34Z\",\"labels\":{\"foo\":\"bar\"},\"annotations\":{\"controller.kubernetes.io/pod-deletion-cost\":\"1000\"},\"ownerReferences\":[{\"apiVersion\":\"apps/v1\",\"kind\":\"ReplicaSet\",\"name\":\"rs\",\"uid\":\"fa4192ba-0789-498c-b99a-581907aaa82b\",\"controller\":true,\"blockOwnerDeletion\":true}],\"managedFields\":[{\"manager\":\"replicaset.test\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2023-05-26T03:19:35Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:controller.kubernetes.io/pod-deletion-cost\":{}},\"f:generateName\":{},\"f:labels\":{\".\":"}
{"Time":"2023-05-26T03:19:35.018097802Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/replicaset","Test":"TestPodDeletionCost/enabled-with-different-costs","Output":":{},\"f:enableServiceLinks\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{}}}}]} curObjectMeta={\"name\":\"rs-qfqg5\",\"generateName\":\"rs-\",\"namespace\":\"enabled-with-different-costs\",\"uid\":\"e52507a5-603d-42d2-9018-80f951742b94\",\"resourceVersion\":\"63500\",\"creationTimestamp\":\"2023-05-26T03:19:34Z\",\"labels\":{\"foo\":\"bar\"},\"annotations\":{\"controller.kubernetes.io/pod-deletion-cost\":\"100\"},\"ownerReferences\":[{\"apiVersion\":\"apps/v1\",\"kind\":\"ReplicaSet\",\"name\":\"rs\",\"uid\":\"fa4192ba-0789-498c-b99a-581907aaa82b\",\"controller\":true,\"blockOwnerDeletion\":true}],\"managedFields\":[{\"manager\":\"replicaset.test\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2023-05-26T03:19:35Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:controller.kubernetes.io/pod-deletion-cost\":{}},\"f:generateName\":{},\"f:labels\":{\".\":{},\"f:foo\":{}},\"f:ownerReferences\":{\".\":{},\"k:{\\\"uid\\\":\\\"fa4192ba-0789-498c-b99a-581907aaa82b\\\"}\":{}}},\"f:spec\":{\"f:containers\":{\"k:{\\\"name\\\":\\\""}
{"Time":"2023-05-26T03:19:35.021607343Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/replicaset","Test":"TestPodDeletionCost/enabled-with-different-costs","Output":"\"f:image\":{},\"f:imagePullPolicy\":{},\"f:name\":{},\"f:resources\":{},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{}}},\"f:dnsPolicy\":{},\"f:enableServiceLinks\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{}}}}]} curObjectMeta={\"name\":\"rs-qfqg5\",\"generateName\":\"rs-\",\"namespace\":\"enabled-with-different-costs\",\"uid\":\"e52507a5-603d-42d2-9018-80f951742b94\",\"resourceVersion\":\"63501\",\"creationTimestamp\":\"2023-05-26T03:19:34Z\",\"labels\":{\"foo\":\"bar\"},\"annotations\":{\"controller.kubernetes.io/pod-deletion-cost\":\"100\"},\"ownerReferences\":[{\"apiVersion\":\"apps/v1\",\"kind\":\"ReplicaSet\",\"name\":\"rs\",\"uid\":\"fa4192ba-0789-498c-b99a-581907aaa82b\",\"controller\":true,\"blockOwnerDeletion\":true}],\"managedFields\":[{\"manager\":\"replicaset.test\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2023-05-26T03:19:35Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:controller.kubernetes.io/pod-deletion-cost\":{}},\"f:generateName\":{},\"f:labels\":{\".\":{}"}
{"Time":"2023-05-26T03:19:39.01633556Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/replicaset","Test":"TestPodDeletionCost/enabled-with-same-costs","Output":"bleServiceLinks\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{}}}}]} curObjectMeta={\"name\":\"rs-ncxz6\",\"generateName\":\"rs-\",\"namespace\":\"enabled-with-same-costs\",\"uid\":\"99a58d5a-8213-4dc0-a445-d3ec2ba5f214\",\"resourceVersion\":\"63599\",\"creationTimestamp\":\"2023-05-26T03:19:38Z\",\"labels\":{\"foo\":\"bar\"},\"annotations\":{\"controller.kubernetes.io/pod-deletion-cost\":\"100\"},\"ownerReferences\":[{\"apiVersion\":\"apps/v1\",\"kind\":\"ReplicaSet\",\"name\":\"rs\",\"uid\":\"848883d8-1711-4517-b85a-77d5b55f5b36\",\"controller\":true,\"blockOwnerDeletion\":true}],\"managedFields\":[{\"manager\":\"replicaset.test\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2023-05-26T03:19:39Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:controller.kubernetes.io/pod-deletion-cost\":{}},\"f:generateName\":{},\"f:labels\":{\".\":{},\"f:foo\":{}},\"f:ownerReferences\":{\".\":{},\"k:{\\\"uid\\\":\\\"848883d8-1711-4517-b85a-77d5b55f5b36\\\"}\":{}}},\"f:spec\":{\"f:containers\":{\"k:{\\\"name\\\":\\\"fake-name\\\"}\":{"}
{"Time":"2023-05-26T03:19:39.019882416Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/replicaset","Test":"TestPodDeletionCost/enabled-with-same-costs","Output":"{},\"f:imagePullPolicy\":{},\"f:name\":{},\"f:resources\":{},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{}}},\"f:dnsPolicy\":{},\"f:enableServiceLinks\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{}}}}]} curObjectMeta={\"name\":\"rs-ncxz6\",\"generateName\":\"rs-\",\"namespace\":\"enabled-with-same-costs\",\"uid\":\"99a58d5a-8213-4dc0-a445-d3ec2ba5f214\",\"resourceVersion\":\"63600\",\"creationTimestamp\":\"2023-05-26T03:19:38Z\",\"labels\":{\"foo\":\"bar\"},\"annotations\":{\"controller.kubernetes.io/pod-deletion-cost\":\"100\"},\"ownerReferences\":[{\"apiVersion\":\"apps/v1\",\"kind\":\"ReplicaSet\",\"name\":\"rs\",\"uid\":\"848883d8-1711-4517-b85a-77d5b55f5b36\",\"controller\":true,\"blockOwnerDeletion\":true}],\"managedFields\":[{\"manager\":\"replicaset.test\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2023-05-26T03:19:39Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:controller.kubernetes.io/pod-deletion-cost\":{}},\"f:generateName\":{},\"f:labels\":{\".\":{},\"f:foo\":{}},\"f"}
... skipping 3 lines ...
{"Time":"2023-05-26T03:19:43.389808076Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/replicaset","Test":"TestPodDeletionCost/enabled-with-no-costs","Output":"erviceLinks\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{}}}}]} curObjectMeta={\"name\":\"rs-qf4rb\",\"generateName\":\"rs-\",\"namespace\":\"enabled-with-no-costs\",\"uid\":\"2c0abc07-90d7-4deb-ac7b-70e16adf5e90\",\"resourceVersion\":\"63709\",\"creationTimestamp\":\"2023-05-26T03:19:43Z\",\"labels\":{\"foo\":\"bar\"},\"ownerReferences\":[{\"apiVersion\":\"apps/v1\",\"kind\":\"ReplicaSet\",\"name\":\"rs\",\"uid\":\"2c73afca-4ada-4eb6-8060-a7992e62c158\",\"controller\":true,\"blockOwnerDeletion\":true}],\"managedFields\":[{\"manager\":\"replicaset.test\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2023-05-26T03:19:43Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:metadata\":{\"f:generateName\":{},\"f:labels\":{\".\":{},\"f:foo\":{}},\"f:ownerReferences\":{\".\":{},\"k:{\\\"uid\\\":\\\"2c73afca-4ada-4eb6-8060-a7992e62c158\\\"}\":{}}},\"f:spec\":{\"f:containers\":{\"k:{\\\"name\\\":\\\"fake-name\\\"}\":{\".\":{},\"f:image\":{},\"f:imagePullPolicy\":{},\"f:name\":{},\"f:resources\":{},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{}}},\"f:dnsPolicy"}
{"Time":"2023-05-26T03:19:43.398115259Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/replicaset","Test":"TestPodDeletionCost/enabled-with-no-costs","Output":"ionSeconds:(*int64)(0xc008f208d0)}, v1.Toleration{Key:\"node.kubernetes.io/unreachable\", Operator:\"Exists\", Value:\"\", Effect:\"NoExecute\", TolerationSeconds:(*int64)(0xc008f208f0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:\"\", Priority:(*int32)(0xc008f208f8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc008f208fc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc00671df20), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil), OS:(*v1.PodOS)(nil), HostUsers:(*bool)(nil), SchedulingGates:[]v1.PodSchedulingGate(nil), ResourceClaims:[]v1.PodResourceClaim(nil)}, Status:v1.PodStatus{Phase:\"Pending\", Conditions:[]v1.PodCondition(nil), Message:\"\", Reason:\"\", NominatedNodeName:\"\", HostIP:\"\", PodIP:\"\", PodIPs:[]v1.PodIP(nil), StartTime:\u003cnil\u003e, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Na"}
{"Time":"2023-05-26T03:19:46.471906872Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/replicaset","Test":"TestPodDeletionCost/disabled-with-different-costs","Output":"y\":{},\"f:enableServiceLinks\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{}}}}]} curObjectMeta={\"name\":\"rs-bpz87\",\"generateName\":\"rs-\",\"namespace\":\"disabled-with-different-costs\",\"uid\":\"37924440-3deb-4409-b04c-402666597fc3\",\"resourceVersion\":\"63838\",\"creationTimestamp\":\"2023-05-26T03:19:46Z\",\"labels\":{\"foo\":\"bar\"},\"annotations\":{\"controller.kubernetes.io/pod-deletion-cost\":\"1000\"},\"ownerReferences\":[{\"apiVersion\":\"apps/v1\",\"kind\":\"ReplicaSet\",\"name\":\"rs\",\"uid\":\"93cd072f-e480-413f-89aa-af5d8f53efd9\",\"controller\":true,\"blockOwnerDeletion\":true}],\"managedFields\":[{\"manager\":\"replicaset.test\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2023-05-26T03:19:46Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:controller.kubernetes.io/pod-deletion-cost\":{}},\"f:generateName\":{},\"f:labels\":{\".\":{},\"f:foo\":{}},\"f:ownerReferences\":{\".\":{},\"k:{\\\"uid\\\":\\\"93cd072f-e480-413f-89aa-af5d8f53efd9\\\"}\":{}}},\"f:spec\":{\"f:containers\":{\"k:{\\\"name\\"}
{"Time":"2023-05-26T03:19:46.476709592Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/replicaset","Test":"TestPodDeletionCost/disabled-with-different-costs","Output":"{},\"f:image\":{},\"f:imagePullPolicy\":{},\"f:name\":{},\"f:resources\":{},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{}}},\"f:dnsPolicy\":{},\"f:enableServiceLinks\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{}}}}]} curObjectMeta={\"name\":\"rs-bpz87\",\"generateName\":\"rs-\",\"namespace\":\"disabled-with-different-costs\",\"uid\":\"37924440-3deb-4409-b04c-402666597fc3\",\"resourceVersion\":\"63839\",\"creationTimestamp\":\"2023-05-26T03:19:46Z\",\"labels\":{\"foo\":\"bar\"},\"annotations\":{\"controller.kubernetes.io/pod-deletion-cost\":\"1000\"},\"ownerReferences\":[{\"apiVersion\":\"apps/v1\",\"kind\":\"ReplicaSet\",\"name\":\"rs\",\"uid\":\"93cd072f-e480-413f-89aa-af5d8f53efd9\",\"controller\":true,\"blockOwnerDeletion\":true}],\"managedFields\":[{\"manager\":\"replicaset.test\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2023-05-26T03:19:46Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:controller.kubernetes.io/pod-deletion-cost\":{}},\"f:generateName\":{},\"f:labels\":{\""}
{"Time":"2023-05-26T03:19:46.481069106Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/replicaset","Test":"TestPodDeletionCost/disabled-with-different-costs","Output":"y\":{},\"f:enableServiceLinks\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{}}}}]} curObjectMeta={\"name\":\"rs-hpb2q\",\"generateName\":\"rs-\",\"namespace\":\"disabled-with-different-costs\",\"uid\":\"22b6714f-1072-4e75-939a-5b2fbcb1effe\",\"resourceVersion\":\"63840\",\"creationTimestamp\":\"2023-05-26T03:19:46Z\",\"labels\":{\"foo\":\"bar\"},\"annotations\":{\"controller.kubernetes.io/pod-deletion-cost\":\"100\"},\"ownerReferences\":[{\"apiVersion\":\"apps/v1\",\"kind\":\"ReplicaSet\",\"name\":\"rs\",\"uid\":\"93cd072f-e480-413f-89aa-af5d8f53efd9\",\"controller\":true,\"blockOwnerDeletion\":true}],\"managedFields\":[{\"manager\":\"replicaset.test\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2023-05-26T03:19:46Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:controller.kubernetes.io/pod-deletion-cost\":{}},\"f:generateName\":{},\"f:labels\":{\".\":{},\"f:foo\":{}},\"f:ownerReferences\":{\".\":{},\"k:{\\\"uid\\\":\\\"93cd072f-e480-413f-89aa-af5d8f53efd9\\\"}\":{}}},\"f:spec\":{\"f:containers\":{\"k:{\\\"name\\\""}
{"Time":"2023-05-26T03:19:46.4856166Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/replicaset","Test":"TestPodDeletionCost/disabled-with-different-costs","Output":"},\"f:image\":{},\"f:imagePullPolicy\":{},\"f:name\":{},\"f:resources\":{},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{}}},\"f:dnsPolicy\":{},\"f:enableServiceLinks\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{}}}}]} curObjectMeta={\"name\":\"rs-hpb2q\",\"generateName\":\"rs-\",\"namespace\":\"disabled-with-different-costs\",\"uid\":\"22b6714f-1072-4e75-939a-5b2fbcb1effe\",\"resourceVersion\":\"63841\",\"creationTimestamp\":\"2023-05-26T03:19:46Z\",\"labels\":{\"foo\":\"bar\"},\"annotations\":{\"controller.kubernetes.io/pod-deletion-cost\":\"100\"},\"ownerReferences\":[{\"apiVersion\":\"apps/v1\",\"kind\":\"ReplicaSet\",\"name\":\"rs\",\"uid\":\"93cd072f-e480-413f-89aa-af5d8f53efd9\",\"controller\":true,\"blockOwnerDeletion\":true}],\"managedFields\":[{\"manager\":\"replicaset.test\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2023-05-26T03:19:46Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:controller.kubernetes.io/pod-deletion-cost\":{}},\"f:generateName\":{},\"f:labels\":{\".\""}
{"Time":"2023-05-26T03:19:48.470831685Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/replicationcontroller","Test":"TestDeletingAndFailedPods","Output":":{}}},\"f:dnsPolicy\":{},\"f:enableServiceLinks\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{}}}}]} curObjectMeta={\"name\":\"rc-7j65k\",\"generateName\":\"rc-\",\"namespace\":\"test-deleting-and-failed-pods\",\"uid\":\"846917fd-676a-4ef0-a6e7-e2795e2af001\",\"resourceVersion\":\"63941\",\"creationTimestamp\":\"2023-05-26T03:19:48Z\",\"labels\":{\"foo\":\"bar\"},\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"ReplicationController\",\"name\":\"rc\",\"uid\":\"2d1868b9-e6c0-4ed3-b8d2-24878e48fa43\",\"controller\":true,\"blockOwnerDeletion\":true}],\"finalizers\":[\"fake.example.com/blockDeletion\"],\"managedFields\":[{\"manager\":\"replicationcontroller.test\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2023-05-26T03:19:48Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:metadata\":{\"f:finalizers\":{\".\":{},\"v:\\\"fake.example.com/blockDeletion\\\"\":{}},\"f:generateName\":{},\"f:labels\":{\".\":{},\"f:foo\":{}},\"f:ownerReferences\":{\".\":{},\"k:{\\\"uid\\\":\\\"2d1868b9-e6c0-4ed3-b8d2-24878e48fa43\\\"}\":{}}},\"f:spec\":{\"f:containers\":{\"k:{\\\""}
{"Time":"2023-05-26T03:19:55.217466356Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/replicaset","Test":"TestPodOrphaningAndAdoptionWhenLabelsChange","Output":"conds\\\":{}}} }]},Spec:PodSpec{Volumes:[]Volume{},Containers:[]Container{Container{Name:fake-name,Image:fakeimage,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:\u0026PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]S"}
{"Time":"2023-05-26T03:19:55.324566251Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/replicaset","Test":"TestPodOrphaningAndAdoptionWhenLabelsChange","Output":"terminationMessagePolicy\":{}}},\"f:dnsPolicy\":{},\"f:enableServiceLinks\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{}}}}]} curObjectMeta={\"name\":\"rs-54qtz\",\"generateName\":\"rs-\",\"namespace\":\"test-pod-orphaning-and-adoption-when-labels-change\",\"uid\":\"8efa0c97-e1e9-4427-bff2-174b7391f4d0\",\"resourceVersion\":\"64356\",\"creationTimestamp\":\"2023-05-26T03:19:55Z\",\"labels\":{\"new-foo\":\"new-bar\"},\"ownerReferences\":[{\"apiVersion\":\"apps/v1\",\"kind\":\"ReplicaSet\",\"name\":\"rs\",\"uid\":\"6639e840-5ea3-44a3-882f-1f4d9d17b3e5\",\"controller\":true,\"blockOwnerDeletion\":true}],\"managedFields\":[{\"manager\":\"replicaset.test\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2023-05-26T03:19:55Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:metadata\":{\"f:generateName\":{},\"f:labels\":{\".\":{},\"f:new-foo\":{}},\"f:ownerReferences\":{\".\":{},\"k:{\\\"uid\\\":\\\"6639e840-5ea3-44a3-882f-1f4d9d17b3e5\\\"}\":{}}},\"f:spec\":{\"f:containers\":{\"k:{\\\"name\\\":\\\"fake-name\\\"}\":{\".\":{},\"f:image\":{},\"f:imagePullPolicy\":{},\"f:nam"}
{"Time":"2023-05-26T03:19:55.329804078Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/replicaset","Test":"TestPodOrphaningAndAdoptionWhenLabelsChange","Output":"conds\\\":{}}} }]},Spec:PodSpec{Volumes:[]Volume{},Containers:[]Container{Container{Name:fake-name,Image:fakeimage,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:\u0026PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]S"}
{"Time":"2023-05-26T03:19:57.998167158Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/replicationcontroller","Test":"TestPodOrphaningAndAdoptionWhenLabelsChange","Output":"tionGracePeriodSeconds\\\":{}}} }]},Spec:PodSpec{Volumes:[]Volume{},Containers:[]Container{Container{Name:fake-name,Image:fakeimage,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:\u0026PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGrou"}
{"Time":"2023-05-26T03:19:58.023157668Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/replicationcontroller","Test":"TestPodOrphaningAndAdoptionWhenLabelsChange","Output":"tionGracePeriodSeconds\\\":{}}} }]},Spec:PodSpec{Volumes:[]Volume{},Containers:[]Container{Container{Name:fake-name,Image:fakeimage,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:\u0026PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGrou"}
{"Time":"2023-05-26T03:19:59.977640122Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/replicaset","Test":"TestGeneralPodAdoption","Output":"\":{},\"f:enableServiceLinks\":{},\"f:restartPolicy\":{},\"f:schedulerName\":{},\"f:securityContext\":{},\"f:terminationGracePeriodSeconds\":{}}}}]} curObjectMeta={\"name\":\"rs-tjfgf\",\"generateName\":\"rs-\",\"namespace\":\"test-general-pod-adoption\",\"uid\":\"4b377416-915a-4fb4-8ad8-caa23d52bd3b\",\"resourceVersion\":\"64774\",\"creationTimestamp\":\"2023-05-26T03:19:59Z\",\"labels\":{\"foo\":\"bar\"},\"ownerReferences\":[{\"apiVersion\":\"apps/v1\",\"kind\":\"StatefulSet\",\"name\":\"rs\",\"uid\":\"fe7434d3-dfda-483d-89ad-146bdf2934c4\",\"controller\":false}],\"managedFields\":[{\"manager\":\"replicaset.test\",\"operation\":\"Update\",\"apiVersion\":\"v1\",\"time\":\"2023-05-26T03:19:59Z\",\"fieldsType\":\"FieldsV1\",\"fieldsV1\":{\"f:metadata\":{\"f:generateName\":{},\"f:labels\":{\".\":{},\"f:foo\":{}},\"f:ownerReferences\":{\".\":{},\"k:{\\\"uid\\\":\\\"fe7434d3-dfda-483d-89ad-146bdf2934c4\\\"}\":{}}},\"f:spec\":{\"f:containers\":{\"k:{\\\"name\\\":\\\"fake-name\\\"}\":{\".\":{},\"f:image\":{},\"f:imagePullPolicy\":{},\"f:name\":{},\"f:resources\":{},\"f:terminationMessagePath\":{},\"f:terminationMessagePolicy\":{}}},\"f:dnsPolicy\":{},"}
... skipping 4314 lines ...
{"Time":"2023-05-26T03:28:57.806380105Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/volumescheduling","Test":"TestVolumeCapacityPriority","Output":"\t- \t\t\t\tAPIVersion: \"flowcontrol.apiserver.k8s.io/v1beta3\",\n"}
{"Time":"2023-05-26T03:28:57.806390958Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/volumescheduling","Test":"TestVolumeCapacityPriority","Output":"\t- \t\t\t\tFieldsV1:   s`{\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:apf.kubernetes.io/auto`...,\n"}
{"Time":"2023-05-26T03:28:57.806815504Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/volumescheduling","Test":"TestVolumeCapacityPriority","Output":"\t- \t\t\t\tAPIVersion:  \"flowcontrol.apiserver.k8s.io/v1beta3\",\n"}
{"Time":"2023-05-26T03:28:57.806842051Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/volumescheduling","Test":"TestVolumeCapacityPriority","Output":"\t- \t\t\t\tAPIVersion: \"flowcontrol.apiserver.k8s.io/v1beta3\",\n"}
{"Time":"2023-05-26T03:28:57.806851116Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/volumescheduling","Test":"TestVolumeCapacityPriority","Output":"\t- \t\t\t\tFieldsV1:   s`{\"f:metadata\":{\"f:annotations\":{\".\":{},\"f:apf.kubernetes.io/auto`...,\n"}
{"Time":"2023-05-26T03:28:57.807319631Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/volumescheduling","Test":"TestVolumeCapacityPriority","Output":"\t- \t\t\t\tAPIVersion: \"flowcontrol.apiserver.k8s.io/v1beta3\",\n"}
{"Time":"2023-05-26T03:28:57.80733336Z","Action":"output","Package":"k8s.io/kubernetes/t{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:168","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2023-05-26T03:30:28Z"}
++ early_exit_handler
++ '[' -n 185 ']'
++ kill -TERM 185
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 4 lines ...