PR areller: [fix] fix and refactor TestValidateStatefulSet and TestValidateStatefulSet test cases
Result ABORTED
Tests 0 failed / 134 succeeded
Started 2022-05-23 15:29
Elapsed 17m24s
Revision 67efd81667dfd36a6742b141cafbbc2e586c1557
Refs 109809

No Test Failures!


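The PR under test refactors the TestValidateStatefulSet unit tests. For orientation only, below is a minimal, hypothetical sketch of the table-driven style such validation tests use; the types, helper, and cases are illustrative stand-ins, not code from the PR. Running each case as a named subtest via t.Run keeps a failure attributable to a single table entry.

// Hypothetical sketch of a table-driven validation test in the style of
// TestValidateStatefulSet; the StatefulSet type and validateStatefulSet
// helper are simplified stand-ins, not the real API or PR code.
package validation

import (
	"strings"
	"testing"
)

// StatefulSet is a trimmed-down stand-in for the real API type.
type StatefulSet struct {
	Name     string
	Replicas int32
}

// validateStatefulSet mimics the shape of a validation function:
// it returns error messages, empty when the object is valid.
func validateStatefulSet(ss StatefulSet) []string {
	var errs []string
	if ss.Name == "" {
		errs = append(errs, "metadata.name: Required value")
	}
	if ss.Replicas < 0 {
		errs = append(errs, "spec.replicas: must be non-negative")
	}
	return errs
}

func TestValidateStatefulSetSketch(t *testing.T) {
	tests := []struct {
		name    string
		set     StatefulSet
		wantErr string // substring expected in an error; "" means valid
	}{
		{name: "valid", set: StatefulSet{Name: "web", Replicas: 3}},
		{name: "missing name", set: StatefulSet{Replicas: 1}, wantErr: "metadata.name"},
		{name: "negative replicas", set: StatefulSet{Name: "web", Replicas: -1}, wantErr: "spec.replicas"},
	}
	for _, tc := range tests {
		t.Run(tc.name, func(t *testing.T) {
			errs := validateStatefulSet(tc.set)
			if tc.wantErr == "" && len(errs) != 0 {
				t.Fatalf("unexpected errors: %v", errs)
			}
			if tc.wantErr != "" && !strings.Contains(strings.Join(errs, ";"), tc.wantErr) {
				t.Fatalf("expected error containing %q, got %v", tc.wantErr, errs)
			}
		})
	}
}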

Error lines from build-log.txt

... skipping 75 lines ...
Recording: record_command_canary
Running command: record_command_canary

+++ Running case: test-cmd.record_command_canary 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: record_command_canary
/home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 162: bogus-expected-to-fail: command not found
!!! [0523 15:35:41] Call tree:
!!! [0523 15:35:41]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 record_command_canary(...)
!!! [0523 15:35:41]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0523 15:35:41]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:138 juLog(...)
!!! [0523 15:35:41]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:166 record_command(...)
!!! [0523 15:35:41]  5: hack/make-rules/test-cmd.sh:35 source(...)
+++ exit code: 1
+++ error: 1
+++ [0523 15:35:41] Running kubeadm tests
+++ [0523 15:35:43] Building go targets for linux/amd64
    k8s.io/kubernetes/hack/make-rules/helpers/go2make (non-static)
+++ [0523 15:35:46] Building go targets for linux/amd64
    k8s.io/kubernetes/cmd/kubeadm (static)
+++ [0523 15:36:29] Building go targets for linux/amd64
... skipping 220 lines ...
    k8s.io/kubernetes/hack/make-rules/helpers/go2make (non-static)
+++ [0523 15:39:31] Building go targets for linux/amd64
    k8s.io/kubernetes/cmd/kube-controller-manager (static)
+++ [0523 15:39:59] Generate kubeconfig for controller-manager
+++ [0523 15:39:59] Starting controller-manager
I0523 15:40:00.604827   56592 serving.go:348] Generated self-signed cert in-memory
W0523 15:40:01.350556   56592 authentication.go:423] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0523 15:40:01.350591   56592 authentication.go:317] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0523 15:40:01.350599   56592 authentication.go:341] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0523 15:40:01.350611   56592 authorization.go:225] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0523 15:40:01.350621   56592 authorization.go:193] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0523 15:40:01.350638   56592 controllermanager.go:180] Version: v1.25.0-alpha.0.603+3218a0a6833d1c
I0523 15:40:01.350682   56592 controllermanager.go:182] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0523 15:40:01.352014   56592 secure_serving.go:210] Serving securely on [::]:10257
I0523 15:40:01.352169   56592 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0523 15:40:01.352325   56592 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
... skipping 13 lines ...
I0523 15:40:01.399373   56592 taint_manager.go:163] "Sending events to api server"
I0523 15:40:01.399450   56592 node_lifecycle_controller.go:505] Controller will reconcile labels.
W0523 15:40:01.399483   56592 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0523 15:40:01.399497   56592 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0523 15:40:01.399511   56592 controllermanager.go:593] Started "nodelifecycle"
W0523 15:40:01.399823   56592 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
E0523 15:40:01.399896   56592 core.go:91] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0523 15:40:01.399908   56592 controllermanager.go:571] Skipping "service"
W0523 15:40:01.400096   56592 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0523 15:40:01.400137   56592 controllermanager.go:593] Started "pvc-protection"
W0523 15:40:01.400275   56592 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0523 15:40:01.400301   56592 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0523 15:40:01.400319   56592 controllermanager.go:593] Started "serviceaccount"
... skipping 117 lines ...
I0523 15:40:01.430497   56592 namespace_controller.go:200] Starting namespace controller
I0523 15:40:01.430516   56592 shared_informer.go:255] Waiting for caches to sync for namespace
I0523 15:40:01.430639   56592 controllermanager.go:593] Started "csrapproving"
W0523 15:40:01.430661   56592 controllermanager.go:571] Skipping "nodeipam"
I0523 15:40:01.430802   56592 node_lifecycle_controller.go:77] Sending events to api server
I0523 15:40:01.430814   56592 certificate_controller.go:119] Starting certificate controller "csrapproving"
E0523 15:40:01.430824   56592 core.go:211] failed to start cloud node lifecycle controller: no cloud provider provided
I0523 15:40:01.430830   56592 shared_informer.go:255] Waiting for caches to sync for certificate-csrapproving
W0523 15:40:01.430836   56592 controllermanager.go:571] Skipping "cloud-node-lifecycle"
I0523 15:40:01.431096   56592 controllermanager.go:593] Started "ttl-after-finished"
I0523 15:40:01.431325   56592 controllermanager.go:593] Started "replicaset"
I0523 15:40:01.431435   56592 ttlafterfinished_controller.go:109] Starting TTL after finished controller
I0523 15:40:01.431455   56592 shared_informer.go:255] Waiting for caches to sync for TTL after finished
... skipping 71 lines ...
I0523 15:40:01.832205   56592 disruption.go:371] Sending events to api server.
I0523 15:40:01.837896   56592 shared_informer.go:262] Caches are synced for resource quota
I0523 15:40:02.203903   56592 shared_informer.go:262] Caches are synced for garbage collector
I0523 15:40:02.203934   56592 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0523 15:40:02.257551   56592 shared_informer.go:262] Caches are synced for garbage collector
node/127.0.0.1 created
W0523 15:40:02.375028   56592 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
+++ [0523 15:40:02] Checking kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short.  Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"25+", GitVersion:"v1.25.0-alpha.0.603+3218a0a6833d1c", GitCommit:"3218a0a6833d1c402ccd41507af29ffb2f3376a6", GitTreeState:"clean", BuildDate:"2022-05-23T14:18:16Z", GoVersion:"go1.18.2", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"25+", GitVersion:"v1.25.0-alpha.0.603+3218a0a6833d1c", GitCommit:"3218a0a6833d1c402ccd41507af29ffb2f3376a6", GitTreeState:"clean", BuildDate:"2022-05-23T14:18:16Z", GoVersion:"go1.18.2", Compiler:"gc", Platform:"linux/amd64"}
The Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.0.0.1"}: failed to allocate IP 10.0.0.1: provided IP is already allocated
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   37s
Recording: run_kubectl_version_tests
Running command: run_kubectl_version_tests

+++ Running case: test-cmd.run_kubectl_version_tests 
... skipping 196 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [0523 15:40:07] Creating namespace namespace-1653320407-11012
namespace/namespace-1653320407-11012 created
Context "test" modified.
+++ [0523 15:40:07] Testing RESTMapper
+++ [0523 15:40:08] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
NAME                              SHORTNAMES   APIVERSION                             NAMESPACED   KIND
bindings                                       v1                                     true         Binding
componentstatuses                 cs           v1                                     false        ComponentStatus
configmaps                        cm           v1                                     true         ConfigMap
endpoints                         ep           v1                                     true         Endpoints
... skipping 60 lines ...
namespace/namespace-1653320414-19864 created
Context "test" modified.
+++ [0523 15:40:14] Testing clusterroles
rbac.sh:29: Successful get clusterroles/cluster-admin {{.metadata.name}}: cluster-admin
(Brbac.sh:30: Successful get clusterrolebindings/cluster-admin {{.metadata.name}}: cluster-admin
(BSuccessful
(Bmessage:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created (dry run)
clusterrole.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
(Bmessage:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created
rbac.sh:42: Successful get clusterrole/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
(BSuccessful
(Bmessage:warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "pod-admin" deleted
... skipping 18 lines ...
(Bclusterrole.rbac.authorization.k8s.io/url-reader created
rbac.sh:61: Successful get clusterrole/url-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:
(Brbac.sh:62: Successful get clusterrole/url-reader {{range.rules}}{{range.nonResourceURLs}}{{.}}:{{end}}{{end}}: /logs/*:/healthz/*:
(Bclusterrole.rbac.authorization.k8s.io/aggregation-reader created
rbac.sh:64: Successful get clusterrole/aggregation-reader {{.metadata.name}}: aggregation-reader
(BSuccessful
(Bmessage:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (server dry run)
Successful
(Bmessage:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created
rbac.sh:77: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
(Bclusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (server dry run)
rbac.sh:80: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
... skipping 64 lines ...
rbac.sh:102: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:foo:test-all-user:
(Brbac.sh:103: Successful get clusterrolebinding/super-group {{range.subjects}}{{.name}}:{{end}}: the-group:foo:test-all-user:
(Brbac.sh:104: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.name}}:{{end}}: sa-name:foo:test-all-user:
(Brolebinding.rbac.authorization.k8s.io/admin created (dry run)
rolebinding.rbac.authorization.k8s.io/admin created (server dry run)
Successful
(Bmessage:Error from server (NotFound): rolebindings.rbac.authorization.k8s.io "admin" not found
has: not found
rolebinding.rbac.authorization.k8s.io/admin created
rbac.sh:113: Successful get rolebinding/admin {{.roleRef.kind}}: ClusterRole
(Brbac.sh:114: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:
(Brolebinding.rbac.authorization.k8s.io/admin subjects updated
rbac.sh:116: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:foo:
... skipping 152 lines ...
namespace/namespace-1653320422-11343 created
Context "test" modified.
+++ [0523 15:40:22] Testing role
role.rbac.authorization.k8s.io/pod-admin created (dry run)
role.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
(Bmessage:Error from server (NotFound): roles.rbac.authorization.k8s.io "pod-admin" not found
has: not found
role.rbac.authorization.k8s.io/pod-admin created
rbac.sh:159: Successful get role/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
(Brbac.sh:160: Successful get role/pod-admin {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
(Brbac.sh:161: Successful get role/pod-admin {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
(BSuccessful
... skipping 439 lines ...
has:valid-pod
Successful
(Bmessage:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
has:valid-pod
core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(Berror: resource(s) were provided, but no name was specified
core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(Bcore.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(Berror: setting 'all' parameter but found a non empty selector. 
core.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(Bcore.sh:210: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(Bwarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:214: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
(Bcore.sh:219: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
... skipping 30 lines ...
I0523 15:40:34.282330   61386 round_trippers.go:553] GET https://127.0.0.1:6443/apis/policy/v1/namespaces/test-kubectl-describe-pod/poddisruptionbudgets/test-pdb-2 200 OK in 1 milliseconds
I0523 15:40:34.283871   61386 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-kubectl-describe-pod/events?fieldSelector=involvedObject.name%3Dtest-pdb-2%2CinvolvedObject.namespace%3Dtest-kubectl-describe-pod%2CinvolvedObject.kind%3DPodDisruptionBudget%2CinvolvedObject.uid%3De5cbe894-217e-46e1-a763-0dd715525565&limit=500 200 OK in 1 milliseconds
(Bpoddisruptionbudget.policy/test-pdb-3 created
core.sh:271: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
(Bpoddisruptionbudget.policy/test-pdb-4 created
core.sh:275: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
(Berror: min-available and max-unavailable cannot be both specified
core.sh:281: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
(Bpod/env-test-pod created
matched TEST_CMD_1
matched <set to the key 'key-1' in secret 'test-secret'>
matched TEST_CMD_2
matched <set to the key 'key-2' of config map 'test-configmap'>
... skipping 242 lines ...
core.sh:542: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.7:
(BSuccessful
(Bmessage:kubectl-create kubectl-patch
has:kubectl-patch
pod/valid-pod patched
core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
(B+++ [0523 15:40:50] "kubectl patch with resourceVersion 589" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
pod "valid-pod" deleted
pod/valid-pod replaced
core.sh:586: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
(BSuccessful
(Bmessage:kubectl-replace
has:kubectl-replace
Successful
(Bmessage:error: --grace-period must have --force specified
has:\-\-grace-period must have \-\-force specified
Successful
(Bmessage:error: --timeout must have --force specified
has:\-\-timeout must have \-\-force specified
node/node-v1-test created
W0523 15:40:51.658457   56592 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0523 15:40:51.711256   56592 event.go:294] "Event occurred" object="node-v1-test" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node node-v1-test event: Registered Node node-v1-test in Controller"
core.sh:614: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
(Bnode/node-v1-test replaced (server dry run)
node/node-v1-test replaced (dry run)
core.sh:639: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
(Bnode/node-v1-test replaced
... skipping 30 lines ...
spec:
  containers:
  - image: k8s.gcr.io/pause:3.7
    name: kubernetes-pause
has:localonlyvalue
core.sh:691: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
(Berror: 'name' already has a value (valid-pod), and --overwrite is false
core.sh:695: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
(Bcore.sh:699: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
(Bpod/valid-pod labeled
core.sh:703: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
(Bcore.sh:707: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(Bwarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 85 lines ...
+++ Running case: test-cmd.run_kubectl_create_error_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_error_tests
+++ [0523 15:41:00] Creating namespace namespace-1653320460-13118
namespace/namespace-1653320460-13118 created
Context "test" modified.
+++ [0523 15:41:01] Testing kubectl create with error
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
... skipping 63 lines ...
	If true, keep the managedFields when printing objects in JSON or YAML format.

    --template='':
	Template string or path to template file to use when -o=go-template, -o=go-template-file. The template format is golang templates [http://golang.org/pkg/text/template/#pkg-overview].

    --validate='strict':
	Must be one of: strict (or true), warn, ignore (or false). 		"true" or "strict" will use a schema to validate the input and fail the request if invalid. It will perform server side validation if ServerSideFieldValidation is enabled on the api-server, but will fall back to less reliable client-side validation if not. 		"warn" will warn about unknown or duplicate fields without blocking the request if server-side field validation is enabled on the API server, and behave as "ignore" otherwise. 		"false" or "ignore" will not perform any schema validation, silently dropping any unknown or duplicate fields.

    --windows-line-endings=false:
	Only relevant if --edit=true. Defaults to the line ending native to your platform.

Usage:
  kubectl create -f FILENAME [options]
... skipping 38 lines ...
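The --template option in the help text above takes Go text/template syntax, which is the same syntax the rbac.sh/core.sh assertions throughout this log evaluate (for example {{range.items}}{{.metadata.name}}:{{end}}). A minimal, self-contained sketch of how such a template executes, assuming a toy map in place of kubectl's real objects (kubectl evaluates templates against the object's JSON form, which is why the field names are lowercase):

// Sketch: evaluating a kubectl-style go-template over toy data.
package main

import (
	"os"
	"text/template"
)

func main() {
	// A map mirrors the lowercase JSON field names seen in the log.
	list := map[string]interface{}{
		"items": []interface{}{
			map[string]interface{}{"metadata": map[string]interface{}{"name": "valid-pod"}},
			map[string]interface{}{"metadata": map[string]interface{}{"name": "env-test-pod"}},
		},
	}
	// Same shape as the assertions in the log output.
	tmpl := template.Must(template.New("names").Parse(
		"{{range .items}}{{.metadata.name}}:{{end}}"))
	if err := tmpl.Execute(os.Stdout, list); err != nil {
		panic(err)
	}
	// Prints: valid-pod:env-test-pod:
}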
I0523 15:41:03.846472   56592 event.go:294] "Event occurred" object="namespace-1653320461-22204/test-deployment-retainkeys-54bb65fd55" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-deployment-retainkeys-54bb65fd55-d4lpr"
deployment.apps "test-deployment-retainkeys" deleted
apply.sh:88: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(Bpod/selector-test-pod created
apply.sh:92: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
(BSuccessful
(Bmessage:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
apply.sh:101: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BW0523 15:41:04.775511   65089 helpers.go:650] --dry-run=true is deprecated (boolean value) and can be replaced with --dry-run=client.
pod/test-pod created (dry run)
pod/test-pod created (dry run)
... skipping 29 lines ...
(Bpod/b created
apply.sh:208: Successful get pods a {{.metadata.name}}: a
(Bapply.sh:209: Successful get pods b -n nsb {{.metadata.name}}: b
(Bpod "a" deleted
pod "b" deleted
Successful
(Bmessage:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
has:all resources selected for prune without explicitly passing --all
pod/a created
pod/b created
I0523 15:41:13.525416   52986 alloc.go:327] "allocated clusterIPs" service="namespace-1653320461-22204/prune-svc" clusterIPs=map[IPv4:10.0.0.123]
service/prune-svc created
I0523 15:41:15.465221   56592 horizontal.go:360] Horizontal Pod Autoscaler frontend has been deleted in namespace-1653320458-3490
... skipping 37 lines ...
apply.sh:262: Successful get pods b -n nsb {{.metadata.name}}: b
(Bpod/b unchanged
pod/a pruned
apply.sh:266: Successful get pods -n nsb {{range.items}}{{.metadata.name}}:{{end}}: b:
(Bnamespace "nsb" deleted
Successful
(Bmessage:error: the namespace from the provided object "nsb" does not match the namespace "foo". You must pass '--namespace=nsb' to perform this operation.
has:the namespace from the provided object "nsb" does not match the namespace "foo".
apply.sh:277: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: 
(Bservice/a created
apply.sh:281: Successful get services a {{.metadata.name}}: a
(BSuccessful
(Bmessage:The Service "a" is invalid: spec.clusterIPs[0]: Invalid value: []string{"10.0.0.12"}: may not change once set
... skipping 28 lines ...
(Bapply.sh:303: Successful get deployment test-the-deployment {{.metadata.name}}: test-the-deployment
(Bapply.sh:304: Successful get service test-the-service {{.metadata.name}}: test-the-service
(Bconfigmap "test-the-map" deleted
service "test-the-service" deleted
deployment.apps "test-the-deployment" deleted
Successful
(Bmessage:Error from server (NotFound): namespaces "multi-resource-ns" not found
has:namespaces "multi-resource-ns" not found
apply.sh:312: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
(Bmessage:namespace/multi-resource-ns created
Error from server (NotFound): error when creating "hack/testdata/multi-resource-1.yaml": namespaces "multi-resource-ns" not found
has:namespaces "multi-resource-ns" not found
Successful
(Bmessage:Error from server (NotFound): pods "test-pod" not found
has:pods "test-pod" not found
pod/test-pod created
namespace/multi-resource-ns unchanged
apply.sh:320: Successful get pods test-pod -n multi-resource-ns {{.metadata.name}}: test-pod
(Bpod "test-pod" deleted
namespace "multi-resource-ns" deleted
I0523 15:41:42.719296   56592 namespace_controller.go:185] Namespace has been deleted nsb
apply.sh:326: Successful get configmaps --field-selector=metadata.name=foo {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
(Bmessage:configmap/foo created
error: resource mapping not found for name: "foo" namespace: "" from "hack/testdata/multi-resource-2.yaml": no matches for kind "Bogus" in version "example.com/v1"
ensure CRDs are installed first
has:no matches for kind "Bogus" in version "example.com/v1"
apply.sh:332: Successful get configmaps foo {{.metadata.name}}: foo
(Bconfigmap "foo" deleted
apply.sh:338: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
... skipping 6 lines ...
(Bpod "pod-a" deleted
pod "pod-c" deleted
apply.sh:346: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(Bapply.sh:350: Successful get crds {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
(Bmessage:customresourcedefinition.apiextensions.k8s.io/widgets.example.com created
error: resource mapping not found for name: "foo" namespace: "" from "hack/testdata/multi-resource-4.yaml": no matches for kind "Widget" in version "example.com/v1"
ensure CRDs are installed first
has:no matches for kind "Widget" in version "example.com/v1"
Successful
(Bmessage:Error from server (NotFound): widgets.example.com "foo" not found
has:widgets.example.com "foo" not found
apply.sh:356: Successful get crds widgets.example.com {{.metadata.name}}: widgets.example.com
(BI0523 15:41:51.085352   56592 namespace_controller.go:185] Namespace has been deleted multi-resource-ns
I0523 15:41:51.404232   52986 controller.go:611] quota admission added evaluator for: widgets.example.com
widget.example.com/foo created
customresourcedefinition.apiextensions.k8s.io/widgets.example.com unchanged
... skipping 32 lines ...
(Bmessage:866
has:866
pod "test-pod" deleted
apply.sh:415: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(B+++ [0523 15:41:54] Testing upgrade kubectl client-side apply to server-side apply
pod/test-pod created
error: Apply failed with 1 conflict: conflict with "kubectl-client-side-apply" using v1: .metadata.labels.name
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
  command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
  manifest to remove references to the fields that should keep their
... skipping 75 lines ...
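The conflict guidance above describes how to take over fields during server-side apply. As a hedged sketch of the programmatic equivalent of re-running the apply with --force-conflicts, here is a minimal client-go example; the kubeconfig path, namespace, field manager, and manifest are illustrative assumptions:

// Sketch: server-side apply of a Pod with conflicts forced, i.e. the
// programmatic analogue of `kubectl apply --server-side --force-conflicts`.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // illustrative path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Illustrative manifest; an apply patch carries the desired state.
	manifest := []byte(`{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {"name": "test-pod", "labels": {"name": "test-pod-applied"}},
  "spec": {"containers": [{"name": "kubernetes-pause", "image": "k8s.gcr.io/pause:3.7"}]}
}`)

	force := true
	pod, err := client.CoreV1().Pods("default").Patch(
		context.TODO(),
		"test-pod",
		types.ApplyPatchType,
		manifest,
		metav1.PatchOptions{
			FieldManager: "my-manager", // takes ownership of the conflicting fields
			Force:        &force,       // equivalent of --force-conflicts
		},
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("applied:", pod.Name)
}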
(Bpod "nginx-extensions" deleted
Successful
(Bmessage:pod/test1 created
has:pod/test1 created
pod "test1" deleted
Successful
(Bmessage:error: Invalid image name "InvalidImageName": invalid reference format
has:error: Invalid image name "InvalidImageName": invalid reference format
+++ exit code: 0
Recording: run_kubectl_create_filter_tests
Running command: run_kubectl_create_filter_tests

+++ Running case: test-cmd.run_kubectl_create_filter_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 3 lines ...
Context "test" modified.
+++ [0523 15:41:58] Testing kubectl create filter
create.sh:50: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(Bpod/selector-test-pod created
create.sh:54: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
(BSuccessful
(Bmessage:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_apply_deployments_tests
Running command: run_kubectl_apply_deployments_tests

... skipping 29 lines ...
I0523 15:42:01.137964   56592 event.go:294] "Event occurred" object="namespace-1653320519-21922/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-6cf67855f7 to 3"
I0523 15:42:01.148059   56592 event.go:294] "Event occurred" object="namespace-1653320519-21922/nginx-6cf67855f7" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-6cf67855f7-wrlb9"
I0523 15:42:01.154820   56592 event.go:294] "Event occurred" object="namespace-1653320519-21922/nginx-6cf67855f7" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-6cf67855f7-wg994"
I0523 15:42:01.155037   56592 event.go:294] "Event occurred" object="namespace-1653320519-21922/nginx-6cf67855f7" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-6cf67855f7-vg2tk"
apps.sh:154: Successful get deployment nginx {{.metadata.name}}: nginx
(BSuccessful
(Bmessage:Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1653320519-21922\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx", Namespace: "namespace-1653320519-21922"
for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
has:Error from server (Conflict)
deployment.apps/nginx configured
I0523 15:42:09.660895   56592 event.go:294] "Event occurred" object="namespace-1653320519-21922/nginx" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-8458596ddd to 3"
I0523 15:42:09.668775   56592 event.go:294] "Event occurred" object="namespace-1653320519-21922/nginx-8458596ddd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-8458596ddd-ggfzg"
I0523 15:42:09.675314   56592 event.go:294] "Event occurred" object="namespace-1653320519-21922/nginx-8458596ddd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-8458596ddd-fvk6d"
I0523 15:42:09.675664   56592 event.go:294] "Event occurred" object="namespace-1653320519-21922/nginx-8458596ddd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-8458596ddd-7tllg"
Successful
... skipping 497 lines ...
+++ [0523 15:42:22] Creating namespace namespace-1653320542-28601
namespace/namespace-1653320542-28601 created
Context "test" modified.
+++ [0523 15:42:22] Testing kubectl get
get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
(Bmessage:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
(Bmessage:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
(Bmessage:{
    "apiVersion": "v1",
    "items": [],
... skipping 21 lines ...
has not:No resources found
Successful
(Bmessage:NAME
has not:No resources found
get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
(Bmessage:error: the server doesn't have a resource type "foobar"
has not:No resources found
Successful
(Bmessage:No resources found in namespace-1653320542-28601 namespace.
has:No resources found
Successful
(Bmessage:
has not:No resources found
Successful
(Bmessage:No resources found in namespace-1653320542-28601 namespace.
has:No resources found
get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(BSuccessful
(Bmessage:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
Successful
(Bmessage:Error from server (NotFound): pods "abc" not found
has not:List
Successful
(Bmessage:I0523 15:42:24.062114   68609 loader.go:372] Config loaded from file:  /tmp/tmp.PynbvlrBZy/.kube/config
I0523 15:42:24.066766   68609 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 4 milliseconds
I0523 15:42:24.101448   68609 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
I0523 15:42:24.102992   68609 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds
... skipping 527 lines ...