PR tkashem: apimachinery: add context bound polling
Result ABORTED
Tests 0 failed / 133 succeeded
Started 2021-05-06 15:40
Elapsed 27m44s
Revision 8d48bc9cb07d529d46f293801fc0e7342ce2b396
Refs 101668

No Test Failures!



Error lines from build-log.txt

... skipping 70 lines ...
Recording: record_command_canary
Running command: record_command_canary

+++ Running case: test-cmd.record_command_canary 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: record_command_canary
/home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 157: bogus-expected-to-fail: command not found
!!! [0506 15:45:35] Call tree:
!!! [0506 15:45:35]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 record_command_canary(...)
!!! [0506 15:45:35]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0506 15:45:35]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:133 juLog(...)
!!! [0506 15:45:35]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:161 record_command(...)
!!! [0506 15:45:35]  5: hack/make-rules/test-cmd.sh:35 source(...)
+++ exit code: 1
+++ error: 1
+++ [0506 15:45:36] Running kubeadm tests
+++ [0506 15:45:41] Building go targets for linux/amd64:
    cmd/kubeadm
+++ [0506 15:46:34] Running tests without code coverage
{"Time":"2021-05-06T15:48:13.209201515Z","Action":"output","Package":"k8s.io/kubernetes/cmd/kubeadm/test/cmd","Output":"ok  \tk8s.io/kubernetes/cmd/kubeadm/test/cmd\t49.493s\n"}
✓  cmd/kubeadm/test/cmd (49.496s)
... skipping 350 lines ...
+++ [0506 15:50:24] Building kube-controller-manager
+++ [0506 15:50:29] Building go targets for linux/amd64:
    cmd/kube-controller-manager
+++ [0506 15:50:58] Generate kubeconfig for controller-manager
+++ [0506 15:50:58] Starting controller-manager
I0506 15:50:59.195936   63669 serving.go:347] Generated self-signed cert in-memory
W0506 15:50:59.796011   63669 authentication.go:419] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0506 15:50:59.796069   63669 authentication.go:316] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0506 15:50:59.796077   63669 authentication.go:340] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0506 15:50:59.796090   63669 authorization.go:225] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0506 15:50:59.796108   63669 authorization.go:193] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0506 15:50:59.796132   63669 controllermanager.go:167] Version: v1.22.0-alpha.1.182+7126263cb1d693
I0506 15:50:59.797826   63669 secure_serving.go:195] Serving securely on [::]:10257
I0506 15:50:59.797898   63669 tlsconfig.go:240] Starting DynamicServingCertificateController
I0506 15:50:59.798720   63669 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
I0506 15:50:59.799142   63669 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
... skipping 78 lines ...
I0506 15:51:00.400316   63669 shared_informer.go:240] Waiting for caches to sync for resource quota
I0506 15:51:00.400348   63669 resource_quota_monitor.go:304] QuotaMonitor running
I0506 15:51:00.400690   63669 controllermanager.go:566] Started "replicaset"
I0506 15:51:00.400879   63669 replica_set.go:181] Starting replicaset controller
I0506 15:51:00.400912   63669 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
I0506 15:51:00.401032   63669 node_lifecycle_controller.go:76] Sending events to api server
E0506 15:51:00.401065   63669 core.go:231] failed to start cloud node lifecycle controller: no cloud provider provided
W0506 15:51:00.401076   63669 controllermanager.go:558] Skipping "cloud-node-lifecycle"
I0506 15:51:00.418129   63669 controllermanager.go:566] Started "namespace"
I0506 15:51:00.418238   63669 namespace_controller.go:200] Starting namespace controller
I0506 15:51:00.418253   63669 shared_informer.go:240] Waiting for caches to sync for namespace
I0506 15:51:00.418819   63669 controllermanager.go:566] Started "ttl-after-finished"
I0506 15:51:00.419024   63669 ttlafterfinished_controller.go:109] Starting TTL after finished controller
... skipping 28 lines ...
I0506 15:51:00.423052   63669 shared_informer.go:240] Waiting for caches to sync for attach detach
I0506 15:51:00.423253   63669 controllermanager.go:566] Started "pvc-protection"
W0506 15:51:00.423266   63669 controllermanager.go:545] "tokencleaner" is disabled
W0506 15:51:00.423274   63669 controllermanager.go:558] Skipping "nodeipam"
I0506 15:51:00.423460   63669 pvc_protection_controller.go:110] "Starting PVC protection controller"
I0506 15:51:00.423478   63669 shared_informer.go:240] Waiting for caches to sync for PVC protection
E0506 15:51:00.423633   63669 core.go:91] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0506 15:51:00.423646   63669 controllermanager.go:558] Skipping "service"
I0506 15:51:00.423656   63669 core.go:241] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W0506 15:51:00.423664   63669 controllermanager.go:558] Skipping "route"
I0506 15:51:00.423913   63669 controllermanager.go:566] Started "podgc"
I0506 15:51:00.424059   63669 gc_controller.go:89] Starting GC controller
I0506 15:51:00.424076   63669 shared_informer.go:240] Waiting for caches to sync for GC
... skipping 38 lines ...
I0506 15:51:00.430571   63669 graph_builder.go:289] GraphBuilder running
I0506 15:51:00.430625   63669 deployment_controller.go:153] "Starting controller" controller="deployment"
I0506 15:51:00.430634   63669 shared_informer.go:240] Waiting for caches to sync for deployment
I0506 15:51:00.430664   63669 stateful_set.go:146] Starting stateful set controller
I0506 15:51:00.430677   63669 shared_informer.go:240] Waiting for caches to sync for stateful set
node/127.0.0.1 created
W0506 15:51:00.461516   63669 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
+++ [0506 15:51:00] Checking kubectl version
W0506 15:51:00.484018   63669 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0506 15:51:00.484108   63669 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0506 15:51:00.484147   63669 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0506 15:51:00.484182   63669 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0506 15:51:00.484529   63669 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
... skipping 38 lines ...
I0506 15:51:00.822512   63669 disruption.go:371] Sending events to api server.
I0506 15:51:00.889154   63669 shared_informer.go:247] Caches are synced for ReplicationController 
I0506 15:51:00.900675   63669 shared_informer.go:247] Caches are synced for resource quota 
I0506 15:51:00.928547   63669 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring 
I0506 15:51:00.929810   63669 shared_informer.go:247] Caches are synced for endpoint 
I0506 15:51:00.931154   63669 shared_informer.go:247] Caches are synced for resource quota 
The Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.0.0.1"}: failed to allocated ip:10.0.0.1 with error:provided IP is already allocated
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   38s
Recording: run_kubectl_version_tests
Running command: run_kubectl_version_tests

+++ Running case: test-cmd.run_kubectl_version_tests 
... skipping 100 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [0506 15:51:06] Creating namespace namespace-1620316266-30216
namespace/namespace-1620316266-30216 created
Context "test" modified.
+++ [0506 15:51:06] Testing RESTMapper
+++ [0506 15:51:06] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
NAME                              SHORTNAMES   APIVERSION                             NAMESPACED   KIND
bindings                                       v1                                     true         Binding
componentstatuses                 cs           v1                                     false        ComponentStatus
configmaps                        cm           v1                                     true         ConfigMap
endpoints                         ep           v1                                     true         Endpoints
... skipping 63 lines ...
namespace/namespace-1620316273-27493 created
Context "test" modified.
+++ [0506 15:51:13] Testing clusterroles
rbac.sh:29: Successful get clusterroles/cluster-admin {{.metadata.name}}: cluster-admin
rbac.sh:30: Successful get clusterrolebindings/cluster-admin {{.metadata.name}}: cluster-admin
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created (dry run)
clusterrole.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created
rbac.sh:42: Successful get clusterrole/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "pod-admin" deleted
... skipping 18 lines ...
clusterrole.rbac.authorization.k8s.io/url-reader created
rbac.sh:61: Successful get clusterrole/url-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:
rbac.sh:62: Successful get clusterrole/url-reader {{range.rules}}{{range.nonResourceURLs}}{{.}}:{{end}}{{end}}: /logs/*:/healthz/*:
clusterrole.rbac.authorization.k8s.io/aggregation-reader created
rbac.sh:64: Successful get clusterrole/aggregation-reader {{.metadata.name}}: aggregation-reader
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created
rbac.sh:77: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (server dry run)
rbac.sh:80: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
... skipping 64 lines ...
rbac.sh:102: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:foo:test-all-user:
rbac.sh:103: Successful get clusterrolebinding/super-group {{range.subjects}}{{.name}}:{{end}}: the-group:foo:test-all-user:
rbac.sh:104: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.name}}:{{end}}: sa-name:foo:test-all-user:
rolebinding.rbac.authorization.k8s.io/admin created (dry run)
rolebinding.rbac.authorization.k8s.io/admin created (server dry run)
Successful
message:Error from server (NotFound): rolebindings.rbac.authorization.k8s.io "admin" not found
has: not found
rolebinding.rbac.authorization.k8s.io/admin created
rbac.sh:113: Successful get rolebinding/admin {{.roleRef.kind}}: ClusterRole
rbac.sh:114: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:
rolebinding.rbac.authorization.k8s.io/admin subjects updated
rbac.sh:116: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:foo:
... skipping 152 lines ...
namespace/namespace-1620316283-16009 created
Context "test" modified.
+++ [0506 15:51:23] Testing role
role.rbac.authorization.k8s.io/pod-admin created (dry run)
role.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): roles.rbac.authorization.k8s.io "pod-admin" not found
has: not found
role.rbac.authorization.k8s.io/pod-admin created
rbac.sh:159: Successful get role/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
rbac.sh:160: Successful get role/pod-admin {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
rbac.sh:161: Successful get role/pod-admin {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
Successful
... skipping 440 lines ...
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
has:valid-pod
core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: resource(s) were provided, but no name was specified
core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0506 15:51:36.078109   59998 client.go:360] parsed scheme: "passthrough"
I0506 15:51:36.078222   59998 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{http://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0506 15:51:36.078238   59998 clientconn.go:948] ClientConn switching balancer to "pick_first"
core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: setting 'all' parameter but found a non empty selector.
core.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:210: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:214: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:219: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
... skipping 35 lines ...
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/test-pdb-3 created
core.sh:271: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/test-pdb-4 created
core.sh:275: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
error: min-available and max-unavailable cannot be both specified
core.sh:281: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
pod/env-test-pod created
matched TEST_CMD_1
matched <set to the key 'key-1' in secret 'test-secret'>
matched TEST_CMD_2
matched <set to the key 'key-2' of config map 'test-configmap'>
... skipping 234 lines ...
core.sh:542: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.4.1:
Successful
message:kubectl-create kubectl-patch
has:kubectl-patch
pod/valid-pod patched
core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
+++ [0506 15:51:58] "kubectl patch with resourceVersion 605" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
pod "valid-pod" deleted
pod/valid-pod replaced
core.sh:586: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
Successful
message:kubectl-replace
has:kubectl-replace
Successful
message:error: --grace-period must have --force specified
has:\-\-grace-period must have \-\-force specified
Successful
message:error: --timeout must have --force specified
has:\-\-timeout must have \-\-force specified
W0506 15:51:59.507042   63669 actual_state_of_world.go:534] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
node/node-v1-test created
core.sh:614: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
node/node-v1-test replaced (server dry run)
node/node-v1-test replaced (dry run)
core.sh:639: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
node/node-v1-test replaced
... skipping 31 lines ...
spec:
  containers:
  - image: k8s.gcr.io/pause:3.4.1
    name: kubernetes-pause
has:localonlyvalue
core.sh:691: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
error: 'name' already has a value (valid-pod), and --overwrite is false
core.sh:695: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
core.sh:699: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
pod/valid-pod labeled
core.sh:703: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
core.sh:707: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 87 lines ...
+++ Running case: test-cmd.run_kubectl_create_error_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_error_tests
+++ [0506 15:52:11] Creating namespace namespace-1620316331-1061
namespace/namespace-1620316331-1061 created
Context "test" modified.
+++ [0506 15:52:11] Testing kubectl create with error
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
... skipping 44 lines ...

Usage:
  kubectl create -f FILENAME [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
+++ [0506 15:52:11] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
+++ exit code: 0
Recording: run_kubectl_apply_tests
Running command: run_kubectl_apply_tests

+++ Running case: test-cmd.run_kubectl_apply_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 29 lines ...
I0506 15:52:15.326611   63669 event.go:291] "Event occurred" object="namespace-1620316332-18668/test-deployment-retainkeys-8695b756f8" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-deployment-retainkeys-8695b756f8-pv6cb"
deployment.apps "test-deployment-retainkeys" deleted
apply.sh:88: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/selector-test-pod created
apply.sh:92: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
apply.sh:101: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
W0506 15:52:16.578767   72262 helpers.go:569] --dry-run=true is deprecated (boolean value) and can be replaced with --dry-run=client.
pod/test-pod created (dry run)
pod/test-pod created (dry run)
... skipping 32 lines ...
pod/b created
apply.sh:208: Successful get pods a {{.metadata.name}}: a
apply.sh:209: Successful get pods b -n nsb {{.metadata.name}}: b
pod "a" deleted
pod "b" deleted
Successful
message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
has:all resources selected for prune without explicitly passing --all
pod/a created
pod/b created
service/prune-svc created
Warning: batch/v1beta1 CronJob is deprecated in v1.21+, unavailable in v1.25+; use batch/v1 CronJob
I0506 15:52:25.801759   63669 horizontal.go:361] Horizontal Pod Autoscaler frontend has been deleted in namespace-1620316328-23390
... skipping 44 lines ...
apply.sh:266: Successful get pods -n nsb {{range.items}}{{.metadata.name}}:{{end}}: b:
namespace "nsb" deleted
I0506 15:52:48.335247   59998 client.go:360] parsed scheme: "passthrough"
I0506 15:52:48.335308   59998 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{http://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0506 15:52:48.335319   59998 clientconn.go:948] ClientConn switching balancer to "pick_first"
Successful
message:error: the namespace from the provided object "nsb" does not match the namespace "foo". You must pass '--namespace=nsb' to perform this operation.
has:the namespace from the provided object "nsb" does not match the namespace "foo".
apply.sh:277: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: 
service/a created
apply.sh:281: Successful get services a {{.metadata.name}}: a
Successful
message:The Service "a" is invalid: spec.clusterIPs[0]: Invalid value: []string{"10.0.0.12"}: may not change once set
... skipping 26 lines ...
apply.sh:303: Successful get deployment test-the-deployment {{.metadata.name}}: test-the-deployment
apply.sh:304: Successful get service test-the-service {{.metadata.name}}: test-the-service
configmap "test-the-map" deleted
service "test-the-service" deleted
deployment.apps "test-the-deployment" deleted
Successful
message:Error from server (NotFound): namespaces "multi-resource-ns" not found
has:namespaces "multi-resource-ns" not found
apply.sh:312: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:namespace/multi-resource-ns created
Error from server (NotFound): error when creating "hack/testdata/multi-resource-1.yaml": namespaces "multi-resource-ns" not found
has:namespaces "multi-resource-ns" not found
Successful
message:Error from server (NotFound): pods "test-pod" not found
has:pods "test-pod" not found
pod/test-pod created
namespace/multi-resource-ns unchanged
apply.sh:320: Successful get pods test-pod -n multi-resource-ns {{.metadata.name}}: test-pod
(Bpod "test-pod" deleted
namespace "multi-resource-ns" deleted
apply.sh:326: Successful get configmaps --field-selector=metadata.name=foo {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:configmap/foo created
error: unable to recognize "hack/testdata/multi-resource-2.yaml": no matches for kind "Bogus" in version "example.com/v1"
has:no matches for kind "Bogus" in version "example.com/v1"
apply.sh:332: Successful get configmaps foo {{.metadata.name}}: foo
configmap "foo" deleted
apply.sh:338: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:pod/pod-a created
... skipping 5 lines ...
(Bpod "pod-a" deleted
pod "pod-c" deleted
apply.sh:346: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
apply.sh:350: Successful get crds {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:customresourcedefinition.apiextensions.k8s.io/widgets.example.com created
error: unable to recognize "hack/testdata/multi-resource-4.yaml": no matches for kind "Widget" in version "example.com/v1"
has:no matches for kind "Widget" in version "example.com/v1"
I0506 15:53:06.167464   59998 client.go:360] parsed scheme: "endpoint"
I0506 15:53:06.167525   59998 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
Successful
message:Error from server (NotFound): widgets.example.com "foo" not found
has:widgets.example.com "foo" not found
apply.sh:356: Successful get crds widgets.example.com {{.metadata.name}}: widgets.example.com
I0506 15:53:08.494191   59998 controller.go:611] quota admission added evaluator for: widgets.example.com
widget.example.com/foo created
customresourcedefinition.apiextensions.k8s.io/widgets.example.com unchanged
I0506 15:53:08.526571   63669 namespace_controller.go:185] Namespace has been deleted multi-resource-ns
... skipping 32 lines ...
message:889
has:889
pod "test-pod" deleted
apply.sh:415: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
+++ [0506 15:53:11] Testing upgrade kubectl client-side apply to server-side apply
pod/test-pod created
error: Apply failed with 1 conflict: conflict with "kubectl-client-side-apply" using v1: .metadata.labels.name
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
  command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
  manifest to remove references to the fields that should keep their
... skipping 70 lines ...
+++ [0506 15:53:14] Testing kubectl run
pod/nginx-extensions created (dry run)
pod/nginx-extensions created (server dry run)
run.sh:32: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
run.sh:35: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/nginx-extensions created
{"component":"entrypoint","file":"prow/entrypoint/run.go:169","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","severity":"error","time":"2021-05-06T15:53:15Z"}