PR: Abirdcfly: Automated cherry pick of #110652: fix: --chunk-size with selector returns missing result
Result: FAILURE
Tests: 0 failed / 62 succeeded
Started: 2022-06-24 02:18
Elapsed: 2h16m
Revision: 2eed3eb9177536fe4012b715fcfc41b9c1d602c8
Refs: 110757
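For context on the change under test: #110652 fixes kubectl list pagination when a label selector is combined with --chunk-size, where matching objects could be missing from the output. A minimal, hypothetical reproduction sketch (the resource and label below are illustrative, not taken from the PR):

    # Page through pods matching a selector, fetching one object per API chunk;
    # before the fix, matches could be silently dropped across page boundaries.
    kubectl get pods -l app=nginx --chunk-size=1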

No Test Failures!



Error lines from build-log.txt

... skipping 76 lines ...
Recording: record_command_canary
Running command: record_command_canary

+++ Running case: test-cmd.record_command_canary 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: record_command_canary
/home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 158: bogus-expected-to-fail: command not found
!!! [0624 02:24:28] Call tree:
!!! [0624 02:24:28]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 record_command_canary(...)
!!! [0624 02:24:28]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0624 02:24:28]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:134 juLog(...)
!!! [0624 02:24:28]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:162 record_command(...)
!!! [0624 02:24:28]  5: hack/make-rules/test-cmd.sh:35 source(...)
+++ exit code: 1
+++ error: 1
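The failure above is the harness's own canary, not a product bug: record_command_canary deliberately runs a nonexistent command to prove that the recording wrapper captures failing cases. A rough sketch of the pattern implied by the call tree (only the function body appears in the log; the wrapper invocation is an assumption):

    # legacy-script.sh: the canary body invokes a deliberately missing command,
    # so the shell reports "command not found" and exits nonzero (127).
    record_command_canary() {
      bogus-expected-to-fail
    }
    # Assumed usage: record_command runs the function through the forked
    # shell2junit helpers (sh2ju.sh) and records the result as a JUnit case.
    record_command record_command_canary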
+++ [0624 02:24:28] Running kubeadm tests
+++ [0624 02:24:33] Building go targets for linux/amd64:
    cmd/kubeadm
> static build CGO_ENABLED=0: k8s.io/kubernetes/cmd/kubeadm
+++ [0624 02:25:27] Running tests without code coverage 
{"Time":"2022-06-24T02:26:13.298007088Z","Action":"output","Package":"k8s.io/kubernetes/cmd/kubeadm/test/cmd","Output":"ok  \tk8s.io/kubernetes/cmd/kubeadm/test/cmd\t44.068s\n"}
... skipping 213 lines ...
+++ [0624 02:28:41] Building go targets for linux/amd64:
    cmd/kube-controller-manager
> static build CGO_ENABLED=0: k8s.io/kubernetes/cmd/kube-controller-manager
+++ [0624 02:29:10] Generate kubeconfig for controller-manager
+++ [0624 02:29:10] Starting controller-manager
I0624 02:29:10.668165   57859 serving.go:348] Generated self-signed cert in-memory
W0624 02:29:11.113328   57859 authentication.go:419] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0624 02:29:11.113386   57859 authentication.go:316] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0624 02:29:11.113394   57859 authentication.go:340] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0624 02:29:11.113407   57859 authorization.go:225] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0624 02:29:11.113426   57859 authorization.go:193] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0624 02:29:11.113446   57859 controllermanager.go:196] Version: v1.23.9-rc.0.3+f7b179d1cc2fc0
I0624 02:29:11.114982   57859 secure_serving.go:200] Serving securely on [::]:10257
I0624 02:29:11.115107   57859 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0624 02:29:11.115327   57859 leaderelection.go:248] attempting to acquire leader lease kube-system/kube-controller-manager...
I0624 02:29:11.132057   54309 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
... skipping 87 lines ...
I0624 02:29:11.283921   57859 replica_set.go:186] Starting replicationcontroller controller
I0624 02:29:11.284486   57859 shared_informer.go:240] Waiting for caches to sync for ReplicationController
W0624 02:29:11.285151   57859 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0624 02:29:11.285390   57859 controllermanager.go:605] Started "csrcleaner"
I0624 02:29:11.285650   57859 cleaner.go:82] Starting CSR cleaner controller
I0624 02:29:11.286256   57859 node_lifecycle_controller.go:77] Sending events to api server
E0624 02:29:11.286292   57859 core.go:212] failed to start cloud node lifecycle controller: no cloud provider provided
W0624 02:29:11.286305   57859 controllermanager.go:583] Skipping "cloud-node-lifecycle"
I0624 02:29:11.286707   57859 controllermanager.go:605] Started "persistentvolume-expander"
I0624 02:29:11.286762   57859 expand_controller.go:342] Starting expand controller
I0624 02:29:11.286776   57859 shared_informer.go:240] Waiting for caches to sync for expand
I0624 02:29:11.286968   57859 controllermanager.go:605] Started "pvc-protection"
I0624 02:29:11.287108   57859 pvc_protection_controller.go:103] "Starting PVC protection controller"
... skipping 56 lines ...
I0624 02:29:11.304914   57859 controllermanager.go:605] Started "statefulset"
I0624 02:29:11.304939   57859 stateful_set.go:147] Starting stateful set controller
I0624 02:29:11.304952   57859 shared_informer.go:240] Waiting for caches to sync for stateful set
I0624 02:29:11.305168   57859 controllermanager.go:605] Started "ttl"
I0624 02:29:11.305389   57859 ttl_controller.go:121] Starting TTL controller
I0624 02:29:11.305409   57859 shared_informer.go:240] Waiting for caches to sync for TTL
E0624 02:29:11.305492   57859 core.go:92] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0624 02:29:11.305520   57859 controllermanager.go:583] Skipping "service"
I0624 02:29:11.305852   57859 controllermanager.go:605] Started "endpoint"
I0624 02:29:11.305975   57859 endpoints_controller.go:193] Starting endpoint controller
I0624 02:29:11.306007   57859 shared_informer.go:240] Waiting for caches to sync for endpoint
I0624 02:29:11.309008   57859 shared_informer.go:240] Waiting for caches to sync for resource quota
W0624 02:29:11.336206   57859 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
... skipping 42 lines ...
I0624 02:29:11.706179   57859 shared_informer.go:247] Caches are synced for endpoint 
I0624 02:29:11.709413   57859 shared_informer.go:247] Caches are synced for resource quota 
I0624 02:29:12.139070   57859 shared_informer.go:247] Caches are synced for garbage collector 
I0624 02:29:12.205737   57859 shared_informer.go:247] Caches are synced for garbage collector 
I0624 02:29:12.205775   57859 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
node/127.0.0.1 created
W0624 02:29:12.851865   57859 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
+++ [0624 02:29:12] Checking kubectl version
Client Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.9-rc.0.3+f7b179d1cc2fc0", GitCommit:"f7b179d1cc2fc003faaf7c41d5a491bd0947211f", GitTreeState:"clean", BuildDate:"2022-06-16T06:43:47Z", GoVersion:"go1.17.11", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.9-rc.0.3+f7b179d1cc2fc0", GitCommit:"f7b179d1cc2fc003faaf7c41d5a491bd0947211f", GitTreeState:"clean", BuildDate:"2022-06-16T06:43:47Z", GoVersion:"go1.17.11", Compiler:"gc", Platform:"linux/amd64"}
The Service "kubernetes" is invalid: spec.clusterIPs: Invalid value: []string{"10.0.0.1"}: failed to allocate IP 10.0.0.1: provided IP is already allocated
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   36s
Recording: run_kubectl_version_tests
Running command: run_kubectl_version_tests

+++ Running case: test-cmd.run_kubectl_version_tests 
... skipping 114 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [0624 02:29:17] Creating namespace namespace-1656037757-28677
namespace/namespace-1656037757-28677 created
Context "test" modified.
+++ [0624 02:29:18] Testing RESTMapper
+++ [0624 02:29:18] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
NAME                              SHORTNAMES   APIVERSION                             NAMESPACED   KIND
bindings                                       v1                                     true         Binding
componentstatuses                 cs           v1                                     false        ComponentStatus
configmaps                        cm           v1                                     true         ConfigMap
endpoints                         ep           v1                                     true         Endpoints
... skipping 61 lines ...
namespace/namespace-1656037768-6809 created
Context "test" modified.
+++ [0624 02:29:28] Testing clusterroles
rbac.sh:29: Successful get clusterroles/cluster-admin {{.metadata.name}}: cluster-admin
rbac.sh:30: Successful get clusterrolebindings/cluster-admin {{.metadata.name}}: cluster-admin
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created (dry run)
clusterrole.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created
rbac.sh:42: Successful get clusterrole/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "pod-admin" deleted
... skipping 18 lines ...
clusterrole.rbac.authorization.k8s.io/url-reader created
rbac.sh:61: Successful get clusterrole/url-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:
rbac.sh:62: Successful get clusterrole/url-reader {{range.rules}}{{range.nonResourceURLs}}{{.}}:{{end}}{{end}}: /logs/*:/healthz/*:
clusterrole.rbac.authorization.k8s.io/aggregation-reader created
rbac.sh:64: Successful get clusterrole/aggregation-reader {{.metadata.name}}: aggregation-reader
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created
rbac.sh:77: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (server dry run)
rbac.sh:80: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
... skipping 64 lines ...
rbac.sh:102: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:foo:test-all-user:
rbac.sh:103: Successful get clusterrolebinding/super-group {{range.subjects}}{{.name}}:{{end}}: the-group:foo:test-all-user:
rbac.sh:104: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.name}}:{{end}}: sa-name:foo:test-all-user:
rolebinding.rbac.authorization.k8s.io/admin created (dry run)
rolebinding.rbac.authorization.k8s.io/admin created (server dry run)
Successful
message:Error from server (NotFound): rolebindings.rbac.authorization.k8s.io "admin" not found
has: not found
rolebinding.rbac.authorization.k8s.io/admin created
rbac.sh:113: Successful get rolebinding/admin {{.roleRef.kind}}: ClusterRole
rbac.sh:114: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:
rolebinding.rbac.authorization.k8s.io/admin subjects updated
rbac.sh:116: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:foo:
... skipping 152 lines ...
namespace/namespace-1656037777-25823 created
Context "test" modified.
+++ [0624 02:29:37] Testing role
role.rbac.authorization.k8s.io/pod-admin created (dry run)
role.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): roles.rbac.authorization.k8s.io "pod-admin" not found
has: not found
role.rbac.authorization.k8s.io/pod-admin created
rbac.sh:159: Successful get role/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
rbac.sh:160: Successful get role/pod-admin {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
rbac.sh:161: Successful get role/pod-admin {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
Successful
... skipping 440 lines ...
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
has:valid-pod
core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: resource(s) were provided, but no name was specified
core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: setting 'all' parameter but found a non empty selector. 
core.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:210: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:214: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:219: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
... skipping 30 lines ...
I0624 02:29:52.367180   62684 round_trippers.go:553] GET https://127.0.0.1:6443/apis/policy/v1/namespaces/test-kubectl-describe-pod/poddisruptionbudgets/test-pdb-2 200 OK in 1 milliseconds
I0624 02:29:52.368978   62684 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/test-kubectl-describe-pod/events?fieldSelector=involvedObject.name%3Dtest-pdb-2%2CinvolvedObject.namespace%3Dtest-kubectl-describe-pod%2CinvolvedObject.kind%3DPodDisruptionBudget%2CinvolvedObject.uid%3Dfe841381-0667-4c00-a5d0-680ae9fc1356&limit=500 200 OK in 1 milliseconds
poddisruptionbudget.policy/test-pdb-3 created
core.sh:271: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
poddisruptionbudget.policy/test-pdb-4 created
core.sh:275: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
error: min-available and max-unavailable cannot be both specified
core.sh:281: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
pod/env-test-pod created
matched TEST_CMD_1
matched <set to the key 'key-1' in secret 'test-secret'>
matched TEST_CMD_2
matched <set to the key 'key-2' of config map 'test-configmap'>
... skipping 242 lines ...
core.sh:542: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.6:
Successful
message:kubectl-create kubectl-patch
has:kubectl-patch
pod/valid-pod patched
core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
+++ [0624 02:30:10] "kubectl patch with resourceVersion 610" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
pod "valid-pod" deleted
pod/valid-pod replaced
core.sh:586: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
Successful
message:kubectl-replace
has:kubectl-replace
Successful
message:error: --grace-period must have --force specified
has:\-\-grace-period must have \-\-force specified
Successful
message:error: --timeout must have --force specified
has:\-\-timeout must have \-\-force specified
W0624 02:30:11.912085   57859 actual_state_of_world.go:539] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
node/node-v1-test created
core.sh:614: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
node/node-v1-test replaced (server dry run)
node/node-v1-test replaced (dry run)
core.sh:639: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
node/node-v1-test replaced
... skipping 30 lines ...
spec:
  containers:
  - image: k8s.gcr.io/pause:3.6
    name: kubernetes-pause
has:localonlyvalue
core.sh:691: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
error: 'name' already has a value (valid-pod), and --overwrite is false
core.sh:695: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
core.sh:699: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
pod/valid-pod labeled
core.sh:703: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
core.sh:707: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 84 lines ...
+++ Running case: test-cmd.run_kubectl_create_error_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_error_tests
+++ [0624 02:30:23] Creating namespace namespace-1656037823-29353
namespace/namespace-1656037823-29353 created
Context "test" modified.
+++ [0624 02:30:23] Testing kubectl create with error
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
... skipping 84 lines ...
I0624 02:30:26.535822   57859 event.go:294] "Event occurred" object="namespace-1656037823-6504/test-deployment-retainkeys-8494bdd944" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-deployment-retainkeys-8494bdd944-mqt87"
deployment.apps "test-deployment-retainkeys" deleted
apply.sh:88: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/selector-test-pod created
apply.sh:92: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
apply.sh:101: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
W0624 02:30:27.648893   66456 helpers.go:612] --dry-run=true is deprecated (boolean value) and can be replaced with --dry-run=client.
pod/test-pod created (dry run)
pod/test-pod created (dry run)
... skipping 29 lines ...
pod/b created
apply.sh:208: Successful get pods a {{.metadata.name}}: a
apply.sh:209: Successful get pods b -n nsb {{.metadata.name}}: b
(Bpod "a" deleted
pod "b" deleted
Successful
message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
has:all resources selected for prune without explicitly passing --all
pod/a created
pod/b created
I0624 02:30:35.679054   54309 alloc.go:329] "allocated clusterIPs" service="namespace-1656037823-6504/prune-svc" clusterIPs=map[IPv4:10.0.0.211]
service/prune-svc created
I0624 02:30:37.427448   57859 horizontal.go:360] Horizontal Pod Autoscaler frontend has been deleted in namespace-1656037820-16130
... skipping 37 lines ...
apply.sh:262: Successful get pods b -n nsb {{.metadata.name}}: b
pod/b unchanged
pod/a pruned
apply.sh:266: Successful get pods -n nsb {{range.items}}{{.metadata.name}}:{{end}}: b:
namespace "nsb" deleted
Successful
message:error: the namespace from the provided object "nsb" does not match the namespace "foo". You must pass '--namespace=nsb' to perform this operation.
has:the namespace from the provided object "nsb" does not match the namespace "foo".
apply.sh:277: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: 
service/a created
apply.sh:281: Successful get services a {{.metadata.name}}: a
Successful
message:The Service "a" is invalid: spec.clusterIPs[0]: Invalid value: []string{"10.0.0.12"}: may not change once set
... skipping 28 lines ...
apply.sh:303: Successful get deployment test-the-deployment {{.metadata.name}}: test-the-deployment
apply.sh:304: Successful get service test-the-service {{.metadata.name}}: test-the-service
(Bconfigmap "test-the-map" deleted
service "test-the-service" deleted
deployment.apps "test-the-deployment" deleted
Successful
message:Error from server (NotFound): namespaces "multi-resource-ns" not found
has:namespaces "multi-resource-ns" not found
apply.sh:312: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:namespace/multi-resource-ns created
Error from server (NotFound): error when creating "hack/testdata/multi-resource-1.yaml": namespaces "multi-resource-ns" not found
has:namespaces "multi-resource-ns" not found
Successful
message:Error from server (NotFound): pods "test-pod" not found
has:pods "test-pod" not found
pod/test-pod created
namespace/multi-resource-ns unchanged
apply.sh:320: Successful get pods test-pod -n multi-resource-ns {{.metadata.name}}: test-pod
pod "test-pod" deleted
namespace "multi-resource-ns" deleted
I0624 02:31:05.481452   57859 namespace_controller.go:185] Namespace has been deleted nsb
apply.sh:326: Successful get configmaps --field-selector=metadata.name=foo {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:configmap/foo created
error: unable to recognize "hack/testdata/multi-resource-2.yaml": no matches for kind "Bogus" in version "example.com/v1"
has:no matches for kind "Bogus" in version "example.com/v1"
apply.sh:332: Successful get configmaps foo {{.metadata.name}}: foo
configmap "foo" deleted
apply.sh:338: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:pod/pod-a created
... skipping 5 lines ...
pod "pod-a" deleted
pod "pod-c" deleted
apply.sh:346: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
apply.sh:350: Successful get crds {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:customresourcedefinition.apiextensions.k8s.io/widgets.example.com created
error: unable to recognize "hack/testdata/multi-resource-4.yaml": no matches for kind "Widget" in version "example.com/v1"
has:no matches for kind "Widget" in version "example.com/v1"
Successful
message:Error from server (NotFound): widgets.example.com "foo" not found
has:widgets.example.com "foo" not found
apply.sh:356: Successful get crds widgets.example.com {{.metadata.name}}: widgets.example.com
I0624 02:31:14.517748   57859 namespace_controller.go:185] Namespace has been deleted multi-resource-ns
I0624 02:31:15.888893   54309 controller.go:611] quota admission added evaluator for: widgets.example.com
widget.example.com/foo created
customresourcedefinition.apiextensions.k8s.io/widgets.example.com unchanged
... skipping 32 lines ...
message:890
has:890
pod "test-pod" deleted
apply.sh:415: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
+++ [0624 02:31:18] Testing upgrade kubectl client-side apply to server-side apply
pod/test-pod created
error: Apply failed with 1 conflict: conflict with "kubectl-client-side-apply" using v1: .metadata.labels.name
Please review the fields above--they currently have other managers. Here
are the ways you can resolve this warning:
* If you intend to manage all of these fields, please re-run the apply
  command with the `--force-conflicts` flag.
* If you do not intend to manage all of the fields, please edit your
  manifest to remove references to the fields that should keep their
... skipping 75 lines ...
pod "nginx-extensions" deleted
Successful
message:pod/test1 created
has:pod/test1 created
pod "test1" deleted
Successful
message:error: Invalid image name "InvalidImageName": invalid reference format
has:error: Invalid image name "InvalidImageName": invalid reference format
+++ exit code: 0
Recording: run_kubectl_create_filter_tests
Running command: run_kubectl_create_filter_tests

+++ Running case: test-cmd.run_kubectl_create_filter_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 3 lines ...
Context "test" modified.
+++ [0624 02:31:23] Testing kubectl create filter
create.sh:50: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/selector-test-pod created
create.sh:54: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_apply_deployments_tests
Running command: run_kubectl_apply_deployments_tests

... skipping 29 lines ...
I0624 02:31:26.945404   57859 event.go:294] "Event occurred" object="namespace-1656037884-7163/nginx" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-67874f748d to 3"
I0624 02:31:26.957964   57859 event.go:294] "Event occurred" object="namespace-1656037884-7163/nginx-67874f748d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-67874f748d-mnkq4"
I0624 02:31:26.967465   57859 event.go:294] "Event occurred" object="namespace-1656037884-7163/nginx-67874f748d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-67874f748d-2mtp6"
I0624 02:31:26.967886   57859 event.go:294] "Event occurred" object="namespace-1656037884-7163/nginx-67874f748d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-67874f748d-hln4n"
apps.sh:154: Successful get deployment nginx {{.metadata.name}}: nginx
Successful
message:Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1656037884-7163\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx", Namespace: "namespace-1656037884-7163"
for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
has:Error from server (Conflict)
deployment.apps/nginx configured
I0624 02:31:35.532607   57859 event.go:294] "Event occurred" object="namespace-1656037884-7163/nginx" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-84cc67fc67 to 3"
I0624 02:31:35.544228   57859 event.go:294] "Event occurred" object="namespace-1656037884-7163/nginx-84cc67fc67" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-84cc67fc67-xblhs"
I0624 02:31:35.553323   57859 event.go:294] "Event occurred" object="namespace-1656037884-7163/nginx-84cc67fc67" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-84cc67fc67-bzmqv"
I0624 02:31:35.553519   57859 event.go:294] "Event occurred" object="namespace-1656037884-7163/nginx-84cc67fc67" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-84cc67fc67-gjm72"
Successful
... skipping 300 lines ...
+++ [0624 02:31:43] Creating namespace namespace-1656037903-28656
namespace/namespace-1656037903-28656 created
Context "test" modified.
+++ [0624 02:31:43] Testing kubectl get
get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:{
    "apiVersion": "v1",
    "items": [],
... skipping 23 lines ...
has not:No resources found
Successful
message:NAME
has not:No resources found
get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:error: the server doesn't have a resource type "foobar"
has not:No resources found
Successful
message:No resources found in namespace-1656037903-28656 namespace.
has:No resources found
Successful
message:
has not:No resources found
Successful
message:No resources found in namespace-1656037903-28656 namespace.
has:No resources found
get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
Successful
message:Error from server (NotFound): pods "abc" not found
has not:List
Successful
message:I0624 02:31:45.724168   69928 loader.go:372] Config loaded from file:  /tmp/tmp.Vv3xOocu4r/.kube/config
I0624 02:31:45.732192   69928 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 7 milliseconds
I0624 02:31:45.773440   69928 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
I0624 02:31:45.775264   69928 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds
... skipping 594 lines ...
}
get.sh:158: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
<no value>Successful
message:valid-pod:
has:valid-pod:
Successful
message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2022-06-24T02:31:53Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fieldsType":"FieldsV1", "fieldsV1":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl-create", "operation":"Update", "time":"2022-06-24T02:31:53Z"}}, "name":"valid-pod", "namespace":"namespace-1656037913-8639", "resourceVersion":"1056", "uid":"40acc2c0-8149-4cd5-accb-058d145c8626"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "preemptionPolicy":"PreemptLowerPriority", "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
has:missing is not found
error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
Successful
message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2022-06-24T02:31:53Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl-create","operation":"Update","time":"2022-06-24T02:31:53Z"}],"name":"valid-pod","namespace":"namespace-1656037913-8639","resourceVersion":"1056","uid":"40acc2c0-8149-4cd5-accb-058d145c8626"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority","priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2022-06-24T02:31:53Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fieldsType:FieldsV1 fieldsV1:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl-create operation:Update time:2022-06-24T02:31:53Z]] name:valid-pod namespace:namespace-1656037913-8639 resourceVersion:1056 uid:40acc2c0-8149-4cd5-accb-058d145c8626] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true preemptionPolicy:PreemptLowerPriority priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
... skipping 84 lines ...
  terminationGracePeriodSeconds: 30
status:
  phase: Pending
  qosClass: Guaranteed
has:name: valid-pod
Successful
message:Error from server (NotFound): pods "invalid-pod" not found
has:"invalid-pod" not found
pod "valid-pod" deleted
get.sh:196: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
(Bpod/redis-master created
pod/valid-pod created
Successful
... skipping 37 lines ...
+++ [0624 02:31:59] Creating namespace namespace-1656037919-22895
namespace/namespace-1656037919-22895 created
Context "test" modified.
+++ [0624 02:31:59] Testing kubectl exec POD COMMAND
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
pod/test-pod created
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pods "test-pod" not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
pod "test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_exec_resource_name_tests
Running command: run_kubectl_exec_resource_name_tests

... skipping 3 lines ...
+++ [0624 02:31:59] Creating namespace namespace-1656037919-4982
namespace/namespace-1656037919-4982 created
Context "test" modified.
+++ [0624 02:31:59] Testing kubectl exec TYPE/NAME COMMAND
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: the server doesn't have a resource type "foo"
has:error:
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): deployments.apps "bar" not found
has:"bar" not found
pod/test-pod created
replicaset.apps/frontend created
I0624 02:32:01.086012   57859 event.go:294] "Event occurred" object="namespace-1656037919-4982/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-9fbtr"
I0624 02:32:01.099626   57859 event.go:294] "Event occurred" object="namespace-1656037919-4982/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-fsqc6"
I0624 02:32:01.099671   57859 event.go:294] "Event occurred" object="namespace-1656037919-4982/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-n6nfs"
configmap/test-set-env-config created
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
has:not implemented
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod, type/name or --filename must be specified
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod frontend-9fbtr does not have a host assigned
has not:not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod frontend-9fbtr does not have a host assigned
has not:pod, type/name or --filename must be specified
pod "test-pod" deleted
replicaset.apps "frontend" deleted
configmap "test-set-env-config" deleted
+++ exit code: 0
Recording: run_create_secret_tests
Running command: run_create_secret_tests

+++ Running case: test-cmd.run_create_secret_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_create_secret_tests
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
message:user-specified
has:user-specified
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"d5961936-5428-43f2-a279-a3ec4ef12537","resourceVersion":"1136","creationTimestamp":"2022-06-24T02:32:02Z"}}
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"d5961936-5428-43f2-a279-a3ec4ef12537","resourceVersion":"1137","creationTimestamp":"2022-06-24T02:32:02Z"},"data":{"key1":"config1"}}
has:uid
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","uid":"d5961936-5428-43f2-a279-a3ec4ef12537","resourceVersion":"1137","creationTimestamp":"2022-06-24T02:32:02Z"},"data":{"key1":"config1"}}
has:config1
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"d5961936-5428-43f2-a279-a3ec4ef12537"}}
Successful
message:Error from server (NotFound): configmaps "tester-update-cm" not found
has:configmaps "tester-update-cm" not found
+++ exit code: 0
Recording: run_kubectl_create_kustomization_directory_tests
Running command: run_kubectl_create_kustomization_directory_tests

+++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 74 lines ...
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}
has:apps/v1beta1
deployment.apps "nginx" deleted
Successful
message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
Successful
message:nginx:
has:nginx:
+++ exit code: 0
Recording: run_kubectl_delete_allnamespaces_tests
... skipping 104 lines ...
has:Timeout
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
Successful
message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
has:Invalid timeout value
pod "valid-pod" deleted
+++ exit code: 0
Recording: run_crd_tests
Running command: run_crd_tests

... skipping 163 lines ...
Flag --record has been deprecated, --record will be removed in the future
foo.company.com/test patched
crd.sh:282: Successful get foos/test {{.patched}}: value2
Flag --record has been deprecated, --record will be removed in the future
foo.company.com/test patched
crd.sh:284: Successful get foos/test {{.patched}}: <no value>
(B+++ [0624 02:32:17] "kubectl patch --local" returns error as expected for CustomResource: error: strategic merge patch is not supported for company.com/v1, Kind=Foo locally, try --type merge
{
    "apiVersion": "company.com/v1",
    "kind": "Foo",
    "metadata": {
        "annotations": {
            "kubernetes.io/change-cause": "kubectl patch foos/test --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 312 lines ...
crd.sh:505: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: 
namespace/non-native-resources created
bar.company.com/test created
crd.sh:510: Successful get bars {{len .items}}: 1
namespace "non-native-resources" deleted
crd.sh:513: Successful get bars {{len .items}}: 0
Error from server (NotFound): namespaces "non-native-resources" not found
customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
+++ exit code: 0
+++ [0624 02:32:32] Testing recursive resources
+++ [0624 02:32:32] Creating namespace namespace-1656037952-25868
namespace/namespace-1656037952-25868 created
Context "test" modified.
generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
W0624 02:32:33.174097   54309 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured
E0624 02:32:33.175801   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0624 02:32:33.295798   54309 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured
E0624 02:32:33.297461   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0624 02:32:33.413318   54309 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured
E0624 02:32:33.415002   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0624 02:32:33.532298   54309 cacher.go:150] Terminating all watchers from cacher *unstructured.Unstructured
E0624 02:32:33.533753   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:pod/busybox0 created
pod/busybox1 created
error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
Successful
message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
W0624 02:32:34.064789   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:32:34.064835   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0624 02:32:34.110442   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:32:34.110500   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0624 02:32:34.281863   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:32:34.281901   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:pod/busybox0 replaced
pod/busybox1 replaced
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:Name:         busybox0
Namespace:    namespace-1656037952-25868
Priority:     0
Node:         <none>
... skipping 155 lines ...
Node-Selectors:   <none>
Tolerations:      <none>
Events:           <none>
unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
W0624 02:32:34.791207   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:32:34.791240   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
Successful
message:pod/busybox0 annotated
pod/busybox1 annotated
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:Warning: resource pods/busybox0 is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
pod/busybox0 configured
Warning: resource pods/busybox1 is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
pod/busybox1 configured
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:264: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:273: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:278: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
Successful
message:pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:283: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:288: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
Successful
message:pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:293: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:297: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "busybox0" force deleted
pod "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
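Note: the "Immediate deletion does not wait..." warning above is kubectl's standard notice for a force delete. A sketch of an invocation that produces it (pod names taken from this run; the exact flags in the test script may differ):

kubectl delete pods busybox0 busybox1 --force --grace-period=0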
generic-resources.sh:302: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
W0624 02:32:36.646097   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:32:36.646140   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/busybox0 created
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0624 02:32:36.673429   57859 event.go:294] "Event occurred" object="namespace-1656037952-25868/busybox0" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox0-lj4t5"
I0624 02:32:36.683988   57859 event.go:294] "Event occurred" object="namespace-1656037952-25868/busybox1" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox1-btxbp"
generic-resources.sh:306: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
W0624 02:32:36.807195   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:32:36.807238   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0624 02:32:36.841385   57859 namespace_controller.go:185] Namespace has been deleted non-native-resources
generic-resources.sh:311: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:312: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:313: Successful get rc busybox1 {{.spec.replicas}}: 1
W0624 02:32:37.066819   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:32:37.066856   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0624 02:32:37.216406   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:32:37.216439   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:318: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 1 2 80
generic-resources.sh:319: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 1 2 80
Successful
message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
horizontalpodautoscaler.autoscaling/busybox1 autoscaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
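Note: the two HPAs asserted above (minReplicas 1, maxReplicas 2, target utilization 80) correspond to an autoscale call of roughly this shape (a sketch, not the literal test command):

kubectl autoscale rc busybox0 --min=1 --max=2 --cpu-percent=80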
horizontalpodautoscaler.autoscaling "busybox0" deleted
horizontalpodautoscaler.autoscaling "busybox1" deleted
generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:328: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:329: Successful get rc busybox1 {{.spec.replicas}}: 1
I0624 02:32:37.954474   54309 alloc.go:329] "allocated clusterIPs" service="namespace-1656037952-25868/busybox0" clusterIPs=map[IPv4:10.0.0.248]
I0624 02:32:37.983956   54309 alloc.go:329] "allocated clusterIPs" service="namespace-1656037952-25868/busybox1" clusterIPs=map[IPv4:10.0.0.220]
generic-resources.sh:333: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
generic-resources.sh:334: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
Successful
message:service/busybox0 exposed
service/busybox1 exposed
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
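Note: the "<no value> 80" assertions above check that each exposed service got port 80 and no port name. A sketch of the equivalent expose call:

kubectl expose rc busybox0 --port=80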
generic-resources.sh:340: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:341: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:342: Successful get rc busybox1 {{.spec.replicas}}: 1
I0624 02:32:38.547623   57859 event.go:294] "Event occurred" object="namespace-1656037952-25868/busybox0" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox0-kqg96"
I0624 02:32:38.598361   57859 event.go:294] "Event occurred" object="namespace-1656037952-25868/busybox1" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox1-xkmqd"
generic-resources.sh:346: Successful get rc busybox0 {{.spec.replicas}}: 2
generic-resources.sh:347: Successful get rc busybox1 {{.spec.replicas}}: 2
Successful
message:replicationcontroller/busybox0 scaled
replicationcontroller/busybox1 scaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:356: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:361: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx1-deployment created
deployment.apps/nginx0-deployment created
error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0624 02:32:39.341475   57859 event.go:294] "Event occurred" object="namespace-1656037952-25868/nginx1-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx1-deployment-7997c7844 to 2"
I0624 02:32:39.381754   57859 event.go:294] "Event occurred" object="namespace-1656037952-25868/nginx0-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx0-deployment-c8966c5d7 to 2"
I0624 02:32:39.381828   57859 event.go:294] "Event occurred" object="namespace-1656037952-25868/nginx1-deployment-7997c7844" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx1-deployment-7997c7844-mzglh"
I0624 02:32:39.395561   57859 event.go:294] "Event occurred" object="namespace-1656037952-25868/nginx1-deployment-7997c7844" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx1-deployment-7997c7844-qmqhj"
I0624 02:32:39.397346   57859 event.go:294] "Event occurred" object="namespace-1656037952-25868/nginx0-deployment-c8966c5d7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx0-deployment-c8966c5d7-p5zm4"
I0624 02:32:39.408777   57859 event.go:294] "Event occurred" object="namespace-1656037952-25868/nginx0-deployment-c8966c5d7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx0-deployment-c8966c5d7-kvchx"
generic-resources.sh:365: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
generic-resources.sh:366: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
generic-resources.sh:370: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
Successful
message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
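Note: "skipped rollback (current template already matches revision 1)" is the expected no-op when undoing to a revision the deployment is already on. A sketch of an equivalent call:

kubectl rollout undo deployment/nginx1-deployment --to-revision=1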
deployment.apps/nginx1-deployment paused
deployment.apps/nginx0-deployment paused
generic-resources.sh:378: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
Successful
message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx0-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx1-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
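Note: the "REVISION  CHANGE-CAUSE" tables above come from rollout history; CHANGE-CAUSE stays <none> unless the kubernetes.io/change-cause annotation is set on the deployment. A sketch:

kubectl rollout history deployment/nginx1-deployment
kubectl annotate deployment/nginx1-deployment kubernetes.io/change-cause="initial rollout"   # would populate CHANGE-CAUSE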
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
deployment.apps "nginx1-deployment" force deleted
deployment.apps "nginx0-deployment" force deleted
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W0624 02:32:40.481970   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:32:40.482012   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0624 02:32:40.574839   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:32:40.574879   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0624 02:32:41.075145   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:32:41.075181   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:400: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/busybox0 created
I0624 02:32:41.670252   57859 event.go:294] "Event occurred" object="namespace-1656037952-25868/busybox0" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox0-5d7n4"
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0624 02:32:41.681513   57859 event.go:294] "Event occurred" object="namespace-1656037952-25868/busybox1" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox1-c92mj"
generic-resources.sh:404: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
... skipping 4 lines ...
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
I0624 02:32:41.874494   57859 shared_informer.go:240] Waiting for caches to sync for resource quota
I0624 02:32:41.874543   57859 shared_informer.go:247] Caches are synced for resource quota 
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox0" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox1" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox0" resuming is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox1" resuming is not supported
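Note: the "pausing/resuming is not supported" errors above are the expected outcome for ReplicationControllers, which have no rollout machinery; only workloads like Deployments implement pause/resume. A sketch of the failing calls:

kubectl rollout pause rc/busybox0    # error: pausing is not supported
kubectl rollout resume rc/busybox0   # error: resuming is not supported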
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0624 02:32:42.203179   57859 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0624 02:32:42.203262   57859 shared_informer.go:247] Caches are synced for garbage collector 
Recording: run_namespace_tests
Running command: run_namespace_tests

+++ Running case: test-cmd.run_namespace_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_namespace_tests
+++ [0624 02:32:43] Testing kubectl(v1:namespaces)
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created (dry run)
W0624 02:32:43.430123   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:32:43.430162   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/my-namespace created (server dry run)
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
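Note: the "(dry run)" and "(server dry run)" lines above confirm that neither client-side nor server-side dry run persists the object, which is why the follow-up lookup still returns NotFound. A sketch:

kubectl create namespace my-namespace --dry-run=client
kubectl create namespace my-namespace --dry-run=server
kubectl get namespace my-namespace    # still NotFound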
namespace/my-namespace created
core.sh:1471: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
query for namespaces had limit param
query for resourcequotas had limit param
query for limitranges had limit param
... skipping 123 lines ...
I0624 02:32:43.971471   75100 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1656037930-26402/resourcequotas?limit=500 200 OK in 1 milliseconds
I0624 02:32:43.972694   75100 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1656037930-26402/limitranges?limit=500 200 OK in 1 milliseconds
I0624 02:32:43.974246   75100 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1656037952-25868 200 OK in 1 milliseconds
I0624 02:32:43.975493   75100 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1656037952-25868/resourcequotas?limit=500 200 OK in 1 milliseconds
I0624 02:32:43.976692   75100 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/namespace-1656037952-25868/limitranges?limit=500 200 OK in 1 milliseconds
namespace "my-namespace" deleted
W0624 02:32:48.035452   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:32:48.035488   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/my-namespace condition met
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created
core.sh:1482: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
... skipping 31 lines ...
namespace "namespace-1656037924-17481" deleted
namespace "namespace-1656037924-18839" deleted
namespace "namespace-1656037925-8084" deleted
namespace "namespace-1656037927-21923" deleted
namespace "namespace-1656037930-26402" deleted
namespace "namespace-1656037952-25868" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:warning: deleting cluster-scoped resources
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
namespace "my-namespace" deleted
namespace "namespace-1656037754-23909" deleted
... skipping 29 lines ...
namespace "namespace-1656037924-17481" deleted
namespace "namespace-1656037924-18839" deleted
namespace "namespace-1656037925-8084" deleted
namespace "namespace-1656037927-21923" deleted
namespace "namespace-1656037930-26402" deleted
namespace "namespace-1656037952-25868" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:namespace "my-namespace" deleted
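Note: the Forbidden errors above are expected; default, kube-public, and kube-system are protected by an admission check and may not be deleted. A sketch of the kind of call that triggers both the cluster-scope warning and those refusals:

kubectl delete namespaces --all   # system namespaces are refused; everything else goes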
namespace/quotas created
core.sh:1489: Successful get namespaces/quotas {{.metadata.name}}: quotas
core.sh:1490: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name \"test-quota\" }}found{{end}}{{end}}:: :
W0624 02:32:50.579159   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:32:50.579198   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
resourcequota/test-quota created (dry run)
resourcequota/test-quota created (server dry run)
core.sh:1494: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name \"test-quota\" }}found{{end}}{{end}}:: :
W0624 02:32:50.851896   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:32:50.851943   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
resourcequota/test-quota created
core.sh:1497: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name \"test-quota\" }}found{{end}}{{end}}:: found:
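Note: the quota sequence above exercises dry run before the real create, mirroring the namespace test. A sketch with illustrative hard limits (the test's actual limits are not shown in the log):

kubectl create quota test-quota --namespace=quotas --hard=pods=10 --dry-run=server
kubectl create quota test-quota --namespace=quotas --hard=pods=10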
query for resourcequotas had limit param
query for resourcequotas had user-specified limit param
Successful describe resourcequotas verbose logs:
I0624 02:32:51.028313   75308 loader.go:372] Config loaded from file:  /tmp/tmp.Vv3xOocu4r/.kube/config
... skipping 2 lines ...
I0624 02:32:51.067543   75308 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/quotas/resourcequotas/test-quota 200 OK in 1 milliseconds
I0624 02:32:51.215235   57859 resource_quota_controller.go:311] Resource quota has been deleted quotas/test-quota
resourcequota "test-quota" deleted
namespace "quotas" deleted
I0624 02:32:52.141102   57859 horizontal.go:360] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1656037952-25868
I0624 02:32:52.182499   57859 horizontal.go:360] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1656037952-25868
W0624 02:32:52.772282   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:32:52.772317   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1511: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
namespace/other created
core.sh:1515: Successful get namespaces/other {{.metadata.name}}: other
core.sh:1519: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
pod/valid-pod created
core.sh:1523: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:1525: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Successful
message:error: a resource cannot be retrieved by name across all namespaces
has:a resource cannot be retrieved by name across all namespaces
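Note: the error above is by design; an object name is only unique within a namespace, so a named get cannot span all namespaces. A sketch of the failing and working forms:

kubectl get pods valid-pod --all-namespaces   # error: a resource cannot be retrieved by name across all namespaces
kubectl get pods --all-namespaces             # listing without a name is fine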
core.sh:1532: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:1536: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
namespace "other" deleted
... skipping 33 lines ...
I0624 02:33:02.897675   57859 namespace_controller.go:185] Namespace has been deleted namespace-1656037925-8084
I0624 02:33:02.973392   57859 namespace_controller.go:185] Namespace has been deleted namespace-1656037930-26402
I0624 02:33:02.986742   57859 namespace_controller.go:185] Namespace has been deleted namespace-1656037927-21923
I0624 02:33:03.017142   57859 namespace_controller.go:185] Namespace has been deleted namespace-1656037919-4982
I0624 02:33:03.042414   57859 namespace_controller.go:185] Namespace has been deleted quotas
I0624 02:33:03.230497   57859 namespace_controller.go:185] Namespace has been deleted namespace-1656037952-25868
W0624 02:33:04.238924   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:33:04.238962   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Recording: run_secrets_test
Running command: run_secrets_test

+++ Running case: test-cmd.run_secrets_test 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 72 lines ...
secret/test-secret created
core.sh:896: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:897: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/tls
(Bsecret "test-secret" deleted
secret/secret-string-data created
core.sh:919: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
W0624 02:33:07.617550   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:33:07.617586   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:920: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
core.sh:921: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
secret "secret-string-data" deleted
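Note: the assertions above show that stringData is write-only: the API server folds it into data as base64 (djE= and djI= decode to v1 and v2), and the stored object keeps no stringData field. A sketch of reproducing that:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-string-data
stringData:
  k1: v1
  k2: v2
EOF
kubectl get secret secret-string-data -o go-template='{{.data}}'         # map[k1:djE= k2:djI=]
kubectl get secret secret-string-data -o go-template='{{.stringData}}'   # <no value>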
core.sh:930: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
secret "test-secret" deleted
namespace "test-secrets" deleted
W0624 02:33:09.415216   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:33:09.415264   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0624 02:33:09.419399   57859 namespace_controller.go:185] Namespace has been deleted other
W0624 02:33:10.800242   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:33:10.800277   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Recording: run_configmap_tests
Running command: run_configmap_tests

+++ Running case: test-cmd.run_configmap_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 43 lines ...
+++ command: run_client_config_tests
+++ [0624 02:33:20] Creating namespace namespace-1656038000-16981
namespace/namespace-1656038000-16981 created
Context "test" modified.
+++ [0624 02:33:20] Testing client config
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:Error in configuration: context was not found for specified context: missing-context
has:context was not found for specified context: missing-context
Successful
message:error: no server found for cluster "missing-cluster"
has:no server found for cluster "missing-cluster"
Successful
message:error: auth info "missing-user" does not exist
has:auth info "missing-user" does not exist
Successful
message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
has:error loading config file
Successful
message:error: stat missing-config: no such file or directory
has:no such file or directory
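Note: the block above probes kubectl's client-config error paths one flag at a time, each pointing at something deliberately missing. A sketch of the corresponding invocations:

kubectl get pods --kubeconfig=missing        # error: stat missing: no such file or directory
kubectl get pods --context=missing-context   # error: context was not found
kubectl get pods --cluster=missing-cluster   # error: no server found for cluster
kubectl get pods --user=missing-user         # error: auth info does not exist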
+++ exit code: 0
Recording: run_service_accounts_tests
Running command: run_service_accounts_tests

+++ Running case: test-cmd.run_service_accounts_tests 
... skipping 58 lines ...
Labels:                        <none>
Annotations:                   <none>
Schedule:                      59 23 31 2 *
Concurrency Policy:            Allow
Suspend:                       False
Successful Job History Limit:  3
Failed Job History Limit:      1
Starting Deadline Seconds:     <unset>
Selector:                      <unset>
Parallelism:                   <unset>
Completions:                   <unset>
Pod Template:
  Labels:  <none>
... skipping 54 lines ...
                  job-name=test-job
Annotations:      cronjob.kubernetes.io/instantiate: manual
Parallelism:      1
Completions:      1
Completion Mode:  NonIndexed
Start Time:       Fri, 24 Jun 2022 02:33:29 +0000
Pods Statuses:    1 Active / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=6251f526-ba83-499e-a1c4-ff0e91a161af
           job-name=test-job
  Containers:
   pi:
    Image:      k8s.gcr.io/perl
... skipping 471 lines ...
  type: ClusterIP
status:
  loadBalancer: {}
Successful
message:kubectl-create kubectl-set
has:kubectl-set
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
core.sh:1034: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
service/redis-master selector updated
I0624 02:33:40.252400   57859 namespace_controller.go:185] Namespace has been deleted test-jobs
Successful
message:Error from server (Conflict): Operation cannot be fulfilled on services "redis-master": the object has been modified; please apply your changes to the latest version and try again
has:Conflict
core.sh:1047: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
service "redis-master" deleted
core.sh:1054: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
core.sh:1058: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
I0624 02:33:41.062218   54309 alloc.go:329] "allocated clusterIPs" service="default/redis-master" clusterIPs=map[IPv4:10.0.0.148]
... skipping 20 lines ...
redis-master   1934
redis-slave    1938
has:redis-master
core.sh:1121: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:redis-slave:
service "redis-master" deleted
service "redis-slave" deleted
W0624 02:33:43.271827   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:33:43.271863   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1128: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
core.sh:1132: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
service/beep-boop created (dry run)
service/beep-boop created (server dry run)
core.sh:1136: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
service/beep-boop created
... skipping 21 lines ...
core.sh:1176: Successful get service exposemetadata {{.metadata.annotations}}: map[zone-context:work]
Successful
message:kubectl-expose
has:kubectl-expose
service "exposemetadata" deleted
service "testmetadata" deleted
W0624 02:33:45.396687   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:33:45.396748   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
pod "testmetadata" deleted
+++ exit code: 0
Recording: run_daemonset_tests
Running command: run_daemonset_tests

+++ Running case: test-cmd.run_daemonset_tests 
... skipping 76 lines ...
apps.sh:90: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:91: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
daemonset.apps/bind rolled back
apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:95: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
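Note: "unable to find specified revision" above is the expected guard when rolling back to a revision that never existed; the follow-up assertions confirm the pod template is unchanged. A sketch:

kubectl rollout undo daemonset/bind --to-revision=1000000   # fails: revision not in history
kubectl rollout undo daemonset/bind                         # plain undo to the previous revision works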
apps.sh:99: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
W0624 02:33:49.932528   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:33:49.932567   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:100: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
daemonset.apps/bind rolled back
apps.sh:103: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
apps.sh:104: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:105: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
daemonset.apps "bind" deleted
... skipping 6 lines ...
+++ command: run_rc_tests
+++ [0624 02:33:50] Creating namespace namespace-1656038030-1121
namespace/namespace-1656038030-1121 created
Context "test" modified.
+++ [0624 02:33:50] Testing kubectl(v1:replicationcontrollers)
core.sh:1205: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
W0624 02:33:50.754551   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:33:50.754588   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/frontend created
I0624 02:33:50.936494   57859 event.go:294] "Event occurred" object="namespace-1656038030-1121/frontend" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-t8wg8"
I0624 02:33:50.977461   57859 event.go:294] "Event occurred" object="namespace-1656038030-1121/frontend" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-f4r54"
I0624 02:33:50.977504   57859 event.go:294] "Event occurred" object="namespace-1656038030-1121/frontend" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-ws6km"
replicationcontroller "frontend" deleted
E0624 02:33:51.044209   57859 replica_set.go:536] sync "namespace-1656038030-1121/frontend" failed with Operation cannot be fulfilled on replicationcontrollers "frontend": StorageError: invalid object, Code: 4, Key: /registry/controllers/namespace-1656038030-1121/frontend, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: b2fecfa4-c98b-40dc-ae92-71fdddcc446d, UID in object meta: 
core.sh:1210: Successful get pods -l "name=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:1214: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/frontend created
I0624 02:33:51.480840   57859 event.go:294] "Event occurred" object="namespace-1656038030-1121/frontend" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-d9ck9"
I0624 02:33:51.495183   57859 event.go:294] "Event occurred" object="namespace-1656038030-1121/frontend" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-w2p8x"
I0624 02:33:51.495232   57859 event.go:294] "Event occurred" object="namespace-1656038030-1121/frontend" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-8nj4r"
... skipping 11 lines ...
Namespace:    namespace-1656038030-1121
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1656038030-1121
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
Namespace:    namespace-1656038030-1121
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
Namespace:    namespace-1656038030-1121
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 27 lines ...
Namespace:    namespace-1656038030-1121
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1656038030-1121
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1656038030-1121
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
Namespace:    namespace-1656038030-1121
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 25 lines ...
core.sh:1240: Successful get rc frontend {{.spec.replicas}}: 3
replicationcontroller/frontend scaled
E0624 02:33:52.723819   57859 replica_set.go:205] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend  namespace-1656038030-1121  42b0432e-70e8-41c0-8533-6602a0bb340b 2053 2 2022-06-24 02:33:51 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  [{kubectl Update v1 <nil> FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {kube-controller-manager Update v1 2022-06-24 02:33:51 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status} {kubectl-create Update v1 2022-06-24 02:33:51 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{"f:selector":{},"f:template":{".":{},"f:metadata":{".":{},"f:creationTimestamp":{},"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{".":{},"f:containers":{".":{},"k:{\"name\":\"php-redis\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"GET_HOSTS_FROM\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v4 [] []  [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002c9d058 <nil> ClusterFirst map[]   <nil>  false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
I0624 02:33:52.779334   57859 event.go:294] "Event occurred" object="namespace-1656038030-1121/frontend" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: frontend-w2p8x"
core.sh:1244: Successful get rc frontend {{.spec.replicas}}: 2
core.sh:1248: Successful get rc frontend {{.spec.replicas}}: 2
error: Expected replicas to be 3, was 2
core.sh:1252: Successful get rc frontend {{.spec.replicas}}: 2
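Note: "Expected replicas to be 3, was 2" above shows the --current-replicas precondition: the scale is refused because the stated current count does not match reality, and the surrounding assertions confirm the rc still has 2 replicas. A sketch of the refused call:

kubectl scale rc frontend --current-replicas=3 --replicas=3   # refused while the rc actually has 2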
core.sh:1256: Successful get rc frontend {{.spec.replicas}}: 2
replicationcontroller/frontend scaled
I0624 02:33:53.268158   57859 event.go:294] "Event occurred" object="namespace-1656038030-1121/frontend" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-vjgww"
core.sh:1260: Successful get rc frontend {{.spec.replicas}}: 3
core.sh:1264: Successful get rc frontend {{.spec.replicas}}: 3
... skipping 15 lines ...
I0624 02:33:54.357327   57859 event.go:294] "Event occurred" object="namespace-1656038030-1121/redis-slave" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: redis-slave-mb6vg"
I0624 02:33:54.381826   57859 event.go:294] "Event occurred" object="namespace-1656038030-1121/redis-slave" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: redis-slave-mfdnr"
core.sh:1278: Successful get rc redis-master {{.spec.replicas}}: 4
core.sh:1279: Successful get rc redis-slave {{.spec.replicas}}: 4
replicationcontroller "redis-master" deleted
replicationcontroller "redis-slave" deleted
E0624 02:33:54.675802   57859 replica_set.go:536] sync "namespace-1656038030-1121/redis-slave" failed with Operation cannot be fulfilled on replicationcontrollers "redis-slave": StorageError: invalid object, Code: 4, Key: /registry/controllers/namespace-1656038030-1121/redis-slave, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 269dfbeb-9ac0-4275-9b8c-0e3ac4108145, UID in object meta: 
deployment.apps/nginx-deployment created
I0624 02:33:54.888844   57859 event.go:294] "Event occurred" object="namespace-1656038030-1121/nginx-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-65547564b5 to 3"
I0624 02:33:54.901849   57859 event.go:294] "Event occurred" object="namespace-1656038030-1121/nginx-deployment-65547564b5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-65547564b5-8qrw2"
I0624 02:33:54.913021   57859 event.go:294] "Event occurred" object="namespace-1656038030-1121/nginx-deployment-65547564b5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-65547564b5-l7gcs"
I0624 02:33:54.913060   57859 event.go:294] "Event occurred" object="namespace-1656038030-1121/nginx-deployment-65547564b5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-65547564b5-dp5nn"
deployment.apps/nginx-deployment scaled
... skipping 5 lines ...
I0624 02:33:56.336974   54309 alloc.go:329] "allocated clusterIPs" service="namespace-1656038030-1121/expose-test-deployment" clusterIPs=map[IPv4:10.0.0.144]
Successful
message:service/expose-test-deployment exposed
has:service/expose-test-deployment exposed
service "expose-test-deployment" deleted
Successful
message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
See 'kubectl expose -h' for help and examples
has:invalid deployment: no selectors
deployment.apps/nginx-deployment created
I0624 02:33:56.775469   57859 event.go:294] "Event occurred" object="namespace-1656038030-1121/nginx-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-65547564b5 to 3"
I0624 02:33:56.824171   57859 event.go:294] "Event occurred" object="namespace-1656038030-1121/nginx-deployment-65547564b5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-65547564b5-brn22"
I0624 02:33:56.854449   57859 event.go:294] "Event occurred" object="namespace-1656038030-1121/nginx-deployment-65547564b5" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-65547564b5-qhf69"
... skipping 25 lines ...
(Bpod "valid-pod" deleted
service "frontend" deleted
service "frontend-2" deleted
service "frontend-3" deleted
service "frontend-4" deleted
Successful
message:error: cannot expose a Node
has:cannot expose
Successful
message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
has:metadata.name: Invalid value
I0624 02:33:59.467939   54309 alloc.go:329] "allocated clusterIPs" service="namespace-1656038030-1121/kubernetes-serve-hostname-testing-sixty-three-characters-in-len" clusterIPs=map[IPv4:10.0.0.242]
Successful
... skipping 32 lines ...
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1403: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 1 2 70
horizontalpodautoscaler.autoscaling "frontend" deleted
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1407: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 2 3 80
horizontalpodautoscaler.autoscaling "frontend" deleted
error: required flag(s) "max" not set
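Note: the autoscale error above is kubectl enforcing its one required flag; --min and --cpu-percent are optional. A sketch of the accepted form (values illustrative):
   kubectl autoscale rc frontend --min=1 --max=2 --cpu-percent=70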
replicationcontroller "frontend" deleted
core.sh:1416: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
... skipping 24 lines ...
          limits:
            cpu: 300m
          requests:
            cpu: 300m
      terminationGracePeriodSeconds: 0
status: {}
Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
deployment.apps/nginx-deployment-resources created
I0624 02:34:02.857583   57859 event.go:294] "Event occurred" object="namespace-1656038030-1121/nginx-deployment-resources" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-resources-5f9f5f86d to 3"
I0624 02:34:02.869741   57859 event.go:294] "Event occurred" object="namespace-1656038030-1121/nginx-deployment-resources-5f9f5f86d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-5f9f5f86d-tnw2x"
I0624 02:34:02.880467   57859 event.go:294] "Event occurred" object="namespace-1656038030-1121/nginx-deployment-resources-5f9f5f86d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-5f9f5f86d-4qmrx"
I0624 02:34:02.880585   57859 event.go:294] "Event occurred" object="namespace-1656038030-1121/nginx-deployment-resources-5f9f5f86d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-5f9f5f86d-fdchx"
core.sh:1422: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
core.sh:1423: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
core.sh:1424: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment-resources resource requirements updated
I0624 02:34:03.267294   57859 event.go:294] "Event occurred" object="namespace-1656038030-1121/nginx-deployment-resources" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-resources-7b667bb8bd to 1"
I0624 02:34:03.312298   57859 event.go:294] "Event occurred" object="namespace-1656038030-1121/nginx-deployment-resources-7b667bb8bd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-7b667bb8bd-6cx7t"
core.sh:1427: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
core.sh:1428: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
error: unable to find container named redis
deployment.apps/nginx-deployment-resources resource requirements updated
I0624 02:34:03.681828   57859 event.go:294] "Event occurred" object="namespace-1656038030-1121/nginx-deployment-resources" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-resources-5f9f5f86d to 2"
I0624 02:34:03.734743   57859 event.go:294] "Event occurred" object="namespace-1656038030-1121/nginx-deployment-resources" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-resources-77dcc977fd to 1"
I0624 02:34:03.746306   57859 event.go:294] "Event occurred" object="namespace-1656038030-1121/nginx-deployment-resources-5f9f5f86d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-resources-5f9f5f86d-tnw2x"
I0624 02:34:03.747041   57859 event.go:294] "Event occurred" object="namespace-1656038030-1121/nginx-deployment-resources-77dcc977fd" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-resources-77dcc977fd-vwvdh"
core.sh:1433: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
... skipping 155 lines ...
    status: "True"
    type: Progressing
  observedGeneration: 4
  replicas: 4
  unavailableReplicas: 4
  updatedReplicas: 1
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
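Note: --local evaluates the change entirely client-side, so the object must be supplied with -f/--filename rather than fetched from the server. A sketch (file name illustrative):
   kubectl set resources -f deploy.yaml --limits=cpu=200m --local -o yaml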
core.sh:1444: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
core.sh:1445: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
core.sh:1446: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 46 lines ...
                pod-template-hash=d789bf5f8
Annotations:    deployment.kubernetes.io/desired-replicas: 1
                deployment.kubernetes.io/max-replicas: 2
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/test-nginx-apps
Replicas:       1 current / 1 desired
Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=test-nginx-apps
           pod-template-hash=d789bf5f8
  Containers:
   nginx:
    Image:        k8s.gcr.io/nginx:test-cmd
... skipping 123 lines ...
apps.sh:311: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
    Image:	k8s.gcr.io/nginx:test-cmd
deployment.apps/nginx rolled back (server dry run)
apps.sh:315: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
deployment.apps/nginx rolled back
apps.sh:319: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
error: unable to find specified revision 1000000 in history
apps.sh:322: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
deployment.apps/nginx rolled back
apps.sh:326: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
deployment.apps/nginx paused
error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
error: deployments.apps "nginx" can't restart paused deployment (run rollout resume first)
deployment.apps/nginx resumed
deployment.apps/nginx rolled back
    deployment.kubernetes.io/revision-history: 1,3
error: desired revision (3) is different from the running revision (5)
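Note: the errors above all trace to one rule: a paused deployment accepts no rollout commands, and an undo can only target a revision still present in history. The usual recovery sequence is roughly:
   kubectl rollout resume deployment/nginx
   kubectl rollout history deployment/nginx      # list the revisions that survive
   kubectl rollout undo deployment/nginx --to-revision=3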
deployment.apps/nginx restarted
I0624 02:34:15.369557   57859 event.go:294] "Event occurred" object="namespace-1656038045-4833/nginx" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-68d6dd6c9d to 2"
I0624 02:34:15.397015   57859 event.go:294] "Event occurred" object="namespace-1656038045-4833/nginx" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-6b9786596d to 1"
I0624 02:34:15.406545   57859 event.go:294] "Event occurred" object="namespace-1656038045-4833/nginx-68d6dd6c9d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-68d6dd6c9d-f2x46"
I0624 02:34:15.406577   57859 event.go:294] "Event occurred" object="namespace-1656038045-4833/nginx-6b9786596d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-6b9786596d-8rq2g"
Successful
... skipping 62 lines ...
I0624 02:34:16.788237   57859 event.go:294] "Event occurred" object="namespace-1656038045-4833/nginx2" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx2-65db79f59d to 3"
I0624 02:34:16.790283   57859 horizontal.go:360] Horizontal Pod Autoscaler frontend has been deleted in namespace-1656038030-1121
I0624 02:34:16.802545   57859 event.go:294] "Event occurred" object="namespace-1656038045-4833/nginx2-65db79f59d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx2-65db79f59d-hbr9f"
I0624 02:34:16.812814   57859 event.go:294] "Event occurred" object="namespace-1656038045-4833/nginx2-65db79f59d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx2-65db79f59d-fzjm4"
I0624 02:34:16.813383   57859 event.go:294] "Event occurred" object="namespace-1656038045-4833/nginx2-65db79f59d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx2-65db79f59d-jgxzj"
deployment.apps "nginx2" deleted
E0624 02:34:16.915369   57859 replica_set.go:536] sync "namespace-1656038045-4833/nginx2-65db79f59d" failed with Operation cannot be fulfilled on replicasets.apps "nginx2-65db79f59d": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1656038045-4833/nginx2-65db79f59d, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: c65da23f-598c-4d16-91b3-6a24112cc54d, UID in object meta: 
deployment.apps "nginx" deleted
apps.sh:360: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx-deployment created
I0624 02:34:17.319307   57859 event.go:294] "Event occurred" object="namespace-1656038045-4833/nginx-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-7485945f99 to 3"
I0624 02:34:17.332876   57859 event.go:294] "Event occurred" object="namespace-1656038045-4833/nginx-deployment-7485945f99" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-7485945f99-q59jj"
I0624 02:34:17.345204   57859 event.go:294] "Event occurred" object="namespace-1656038045-4833/nginx-deployment-7485945f99" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-7485945f99-fg5fs"
... skipping 7 lines ...
apps.sh:370: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment image updated
I0624 02:34:18.188209   57859 event.go:294] "Event occurred" object="namespace-1656038045-4833/nginx-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-d694cbfb7 to 1"
I0624 02:34:18.210631   57859 event.go:294] "Event occurred" object="namespace-1656038045-4833/nginx-deployment-d694cbfb7" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: nginx-deployment-d694cbfb7-h4j6r"
apps.sh:373: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
apps.sh:374: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
error: unable to find container named "redis"
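Note: kubectl set image addresses containers by name, not by current image, so the name on the left of = must exist in the pod template. Sketch using the names from this test:
   kubectl set image deployment/nginx-deployment nginx=k8s.gcr.io/nginx:1.7.9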
deployment.apps/nginx-deployment image updated
apps.sh:379: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:380: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment image updated
apps.sh:383: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
apps.sh:384: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
... skipping 48 lines ...
I0624 02:34:22.343171   57859 event.go:294] "Event occurred" object="namespace-1656038045-4833/nginx-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-686d795775 to 0"
I0624 02:34:22.361305   57859 event.go:294] "Event occurred" object="namespace-1656038045-4833/nginx-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set nginx-deployment-95ff6c788 to 1"
deployment.apps/nginx-deployment env updated
deployment.apps/nginx-deployment env updated
I0624 02:34:22.566776   57859 event.go:294] "Event occurred" object="namespace-1656038045-4833/nginx-deployment-686d795775" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: nginx-deployment-686d795775-zpjml"
Successful
message:error: standard input cannot be used for multiple arguments
has:standard input cannot be used for multiple arguments
I0624 02:34:22.723300   57859 event.go:294] "Event occurred" object="namespace-1656038045-4833/nginx-deployment" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set nginx-deployment-8546b9fd9c to 0"
deployment.apps "nginx-deployment" deleted
E0624 02:34:22.851078   57859 replica_set.go:536] sync "namespace-1656038045-4833/nginx-deployment-7485945f99" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-7485945f99": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1656038045-4833/nginx-deployment-7485945f99, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: a14c40aa-3cfa-4a80-b1be-53d0aa69e7f6, UID in object meta: 
configmap "test-set-env-config" deleted
E0624 02:34:22.936210   57859 replica_set.go:536] sync "namespace-1656038045-4833/nginx-deployment-95ff6c788" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-95ff6c788": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1656038045-4833/nginx-deployment-95ff6c788, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: d50daa7e-e9c5-461e-8d5d-4197edadca7a, UID in object meta: 
secret "test-set-env-secret" deleted
+++ exit code: 0
Recording: run_rs_tests
Running command: run_rs_tests

+++ Running case: test-cmd.run_rs_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_rs_tests
E0624 02:34:23.036613   57859 replica_set.go:536] sync "namespace-1656038045-4833/nginx-deployment-686d795775" failed with replicasets.apps "nginx-deployment-686d795775" not found
+++ [0624 02:34:23] Creating namespace namespace-1656038063-18473
E0624 02:34:23.085297   57859 replica_set.go:536] sync "namespace-1656038045-4833/nginx-deployment-5d6cfd855d" failed with replicasets.apps "nginx-deployment-5d6cfd855d" not found
namespace/namespace-1656038063-18473 created
Context "test" modified.
+++ [0624 02:34:23] Testing kubectl(v1:replicasets)
E0624 02:34:23.184631   57859 replica_set.go:536] sync "namespace-1656038045-4833/nginx-deployment-8546b9fd9c" failed with replicasets.apps "nginx-deployment-8546b9fd9c" not found
apps.sh:553: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
replicaset.apps/frontend created
+++ [0624 02:34:23] Deleting rs
I0624 02:34:23.435144   57859 event.go:294] "Event occurred" object="namespace-1656038063-18473/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-25f2k"
I0624 02:34:23.445888   57859 event.go:294] "Event occurred" object="namespace-1656038063-18473/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-ksnqk"
I0624 02:34:23.445927   57859 event.go:294] "Event occurred" object="namespace-1656038063-18473/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-xsg4d"
replicaset.apps "frontend" deleted
E0624 02:34:23.634535   57859 replica_set.go:536] sync "namespace-1656038063-18473/frontend" failed with replicasets.apps "frontend" not found
apps.sh:559: Successful get pods -l "tier=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:563: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
replicaset.apps/frontend created
I0624 02:34:23.950649   57859 event.go:294] "Event occurred" object="namespace-1656038063-18473/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-8g7s7"
I0624 02:34:23.965114   57859 event.go:294] "Event occurred" object="namespace-1656038063-18473/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-lb9gt"
I0624 02:34:23.965165   57859 event.go:294] "Event occurred" object="namespace-1656038063-18473/frontend" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: frontend-pclrs"
apps.sh:567: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
+++ [0624 02:34:24] Deleting rs
replicaset.apps "frontend" deleted
E0624 02:34:24.138593   57859 replica_set.go:536] sync "namespace-1656038063-18473/frontend" failed with Operation cannot be fulfilled on replicasets.apps "frontend": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1656038063-18473/frontend, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: e40d573e-5747-4601-9366-0d4e5d7d5f63, UID in object meta: 
apps.sh:571: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:573: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
pod "frontend-8g7s7" deleted
pod "frontend-lb9gt" deleted
pod "frontend-pclrs" deleted
apps.sh:576: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 15 lines ...
Namespace:    namespace-1656038063-18473
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
Namespace:    namespace-1656038063-18473
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
Namespace:    namespace-1656038063-18473
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
Namespace:    namespace-1656038063-18473
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 25 lines ...
Namespace:    namespace-1656038063-18473
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 9 lines ...
Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-g6tqv
  Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-nm4k6
  Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-sjl69
W0624 02:34:25.556633   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:34:25.556670   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful describe
Name:         frontend
Namespace:    namespace-1656038063-18473
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1656038063-18473
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
Namespace:    namespace-1656038063-18473
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 225 lines ...
horizontalpodautoscaler.autoscaling/frontend autoscaled
apps.sh:716: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{(index .spec.metrics 0).resource.target.averageUtilization}}: 2 3 80
Successful
message:kubectl-autoscale
has:kubectl-autoscale
horizontalpodautoscaler.autoscaling "frontend" deleted
error: required flag(s) "max" not set
replicaset.apps "frontend" deleted
+++ exit code: 0
Recording: run_stateful_set_tests
Running command: run_stateful_set_tests

+++ Running case: test-cmd.run_stateful_set_tests 
... skipping 38 lines ...
namespace/namespace-1656038075-18702 created
Context "test" modified.
+++ [0624 02:34:35] Testing kubectl(v1:statefulsets, v1:controllerrevisions)
apps.sh:456: Successful get statefulset {{range.items}}{{.metadata.name}}:{{end}}: 
Flag --record has been deprecated, --record will be removed in the future
statefulset.apps/nginx created
W0624 02:34:35.834349   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:34:35.834386   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:460: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true"},"labels":{"app":"nginx-statefulset"},"name":"nginx","namespace":"namespace-1656038075-18702"},"spec":{"replicas":0,"selector":{"matchLabels":{"app":"nginx-statefulset"}},"serviceName":"nginx","template":{"metadata":{"labels":{"app":"nginx-statefulset"}},"spec":{"containers":[{"command":["sh","-c","while true; do sleep 1; done"],"image":"k8s.gcr.io/nginx-slim:0.7","name":"nginx","ports":[{"containerPort":80,"name":"web"}]}],"terminationGracePeriodSeconds":5}},"updateStrategy":{"type":"RollingUpdate"}}}
 kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-statefulset.yaml --record=true --server=https://127.0.0.1:6443 --insecure-skip-tls-verify=true --match-server-version=true]:
statefulset.apps/nginx skipped rollback (current template already matches revision 1)
apps.sh:463: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
apps.sh:464: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
Flag --record has been deprecated, --record will be removed in the future
... skipping 24 lines ...
apps.sh:475: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:476: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
statefulset.apps/nginx rolled back
apps.sh:479: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
apps.sh:480: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
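Note: StatefulSet rollbacks ride on the ControllerRevisions checked above, and undo rejects revision numbers missing from that history. A sketch of inspecting and then targeting a real revision:
   kubectl rollout history statefulset/nginx
   kubectl rollout undo statefulset/nginx --to-revision=2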
apps.sh:484: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
apps.sh:485: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
statefulset.apps/nginx rolled back
apps.sh:488: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
apps.sh:489: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
... skipping 29 lines ...
namespace/namespace-1656038079-7966 created
Context "test" modified.
+++ [0624 02:34:39] Testing kubectl(v1:multiple resources)
Testing with file hack/testdata/multi-resource-yaml.yaml and replace with file hack/testdata/multi-resource-yaml-modify.yaml
generic-resources.sh:63: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: 
generic-resources.sh:64: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
W0624 02:34:39.650483   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:34:39.650529   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0624 02:34:39.678621   54309 alloc.go:329] "allocated clusterIPs" service="namespace-1656038079-7966/mock" clusterIPs=map[IPv4:10.0.0.30]
service/mock created
replicationcontroller/mock created
I0624 02:34:39.733437   57859 event.go:294] "Event occurred" object="namespace-1656038079-7966/mock" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mock-tvvcp"
generic-resources.sh:72: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: mock:
generic-resources.sh:80: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: mock:
... skipping 22 lines ...
Name:         mock
Namespace:    namespace-1656038079-7966
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:3.6
    Port:         9949/TCP
... skipping 61 lines ...
Name:         mock
Namespace:    namespace-1656038079-7966
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:3.6
    Port:         9949/TCP
... skipping 13 lines ...
I0624 02:34:43.027550   57859 event.go:294] "Event occurred" object="namespace-1656038079-7966/mock" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: mock-l7tbk"
generic-resources.sh:96: Successful get services mock {{.metadata.labels.status}}: replaced
generic-resources.sh:102: Successful get rc mock {{.metadata.labels.status}}: replaced
service/mock edited
replicationcontroller/mock edited
generic-resources.sh:114: Successful get services mock {{.metadata.labels.status}}: edited
W0624 02:34:43.618034   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:34:43.618066   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:120: Successful get rc mock {{.metadata.labels.status}}: edited
service/mock labeled
replicationcontroller/mock labeled
generic-resources.sh:134: Successful get services mock {{.metadata.labels.labeled}}: true
generic-resources.sh:140: Successful get rc mock {{.metadata.labels.labeled}}: true
service/mock annotated
... skipping 36 lines ...
Name:         mock
Namespace:    namespace-1656038079-7966
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:3.6
    Port:         9949/TCP
... skipping 42 lines ...
Namespace:    namespace-1656038079-7966
Selector:     app=mock
Labels:       app=mock
              status=replaced
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:3.6
    Port:         9949/TCP
... skipping 11 lines ...
Namespace:    namespace-1656038079-7966
Selector:     app=mock2
Labels:       app=mock2
              status=replaced
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock2
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:3.6
    Port:         9949/TCP
... skipping 115 lines ...
+++ [0624 02:34:53] Creating namespace namespace-1656038093-21508
namespace/namespace-1656038093-21508 created
Context "test" modified.
+++ [0624 02:34:53] Testing persistent volumes
storage.sh:30: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
persistentvolume/pv0001 created
E0624 02:34:53.506195   57859 pv_protection_controller.go:114] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
E0624 02:34:53.528205   57859 pv_protection_controller.go:114] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
(Bpersistentvolume "pv0001" deleted
persistentvolume/pv0002 created
E0624 02:34:53.940738   57859 pv_protection_controller.go:114] PV pv0002 failed with : Operation cannot be fulfilled on persistentvolumes "pv0002": the object has been modified; please apply your changes to the latest version and try again
storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
(Bpersistentvolume "pv0002" deleted
persistentvolume/pv0003 created
E0624 02:34:54.280734   57859 pv_protection_controller.go:114] PV pv0003 failed with : Operation cannot be fulfilled on persistentvolumes "pv0003": the object has been modified; please apply your changes to the latest version and try again
storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
query for persistentvolumes had limit param
query for events had limit param
query for persistentvolumes had user-specified limit param
Successful describe persistentvolumes verbose logs:
I0624 02:34:54.407207   87483 loader.go:372] Config loaded from file:  /tmp/tmp.Vv3xOocu4r/.kube/config
I0624 02:34:54.412928   87483 round_trippers.go:553] GET https://127.0.0.1:6443/version?timeout=32s 200 OK in 5 milliseconds
I0624 02:34:54.457818   87483 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/persistentvolumes?limit=500 200 OK in 5 milliseconds
I0624 02:34:54.460966   87483 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/persistentvolumes/pv0003 200 OK in 1 milliseconds
I0624 02:34:54.473550   87483 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/events?fieldSelector=involvedObject.name%3Dpv0003%2CinvolvedObject.namespace%3D%2CinvolvedObject.kind%3DPersistentVolume%2CinvolvedObject.uid%3D8b790959-14cc-468a-91ac-e81138d30396&limit=500 200 OK in 11 milliseconds
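Note: the loader/round_trippers lines above come from client-side verbosity; any kubectl command echoes its HTTP requests at -v=6 or higher, which is how the limit params are asserted here. Sketch:
   kubectl describe pv pv0003 -v=6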
(Bpersistentvolume "pv0003" deleted
storage.sh:44: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
(Bpersistentvolume/pv0001 created
E0624 02:34:54.944039   57859 pv_protection_controller.go:114] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
E0624 02:34:54.964166   57859 pv_protection_controller.go:114] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
storage.sh:47: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
persistentvolume "pv0001" deleted
has:warning: deleting cluster-scoped resources
Successful
... skipping 88 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 24 Jun 2022 02:29:12 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 24 Jun 2022 02:29:12 +0000   Fri, 24 Jun 2022 02:30:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 24 Jun 2022 02:29:12 +0000   Fri, 24 Jun 2022 02:30:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 24 Jun 2022 02:29:12 +0000   Fri, 24 Jun 2022 02:30:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 31 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 24 Jun 2022 02:29:12 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 24 Jun 2022 02:29:12 +0000   Fri, 24 Jun 2022 02:30:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 24 Jun 2022 02:29:12 +0000   Fri, 24 Jun 2022 02:30:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 24 Jun 2022 02:29:12 +0000   Fri, 24 Jun 2022 02:30:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 32 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 24 Jun 2022 02:29:12 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 24 Jun 2022 02:29:12 +0000   Fri, 24 Jun 2022 02:30:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 24 Jun 2022 02:29:12 +0000   Fri, 24 Jun 2022 02:30:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 24 Jun 2022 02:29:12 +0000   Fri, 24 Jun 2022 02:30:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 31 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 24 Jun 2022 02:29:12 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 24 Jun 2022 02:29:12 +0000   Fri, 24 Jun 2022 02:30:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 24 Jun 2022 02:29:12 +0000   Fri, 24 Jun 2022 02:30:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 24 Jun 2022 02:29:12 +0000   Fri, 24 Jun 2022 02:30:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 39 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 24 Jun 2022 02:29:12 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 24 Jun 2022 02:29:12 +0000   Fri, 24 Jun 2022 02:30:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 24 Jun 2022 02:29:12 +0000   Fri, 24 Jun 2022 02:30:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 24 Jun 2022 02:29:12 +0000   Fri, 24 Jun 2022 02:30:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 31 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 24 Jun 2022 02:29:12 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 24 Jun 2022 02:29:12 +0000   Fri, 24 Jun 2022 02:30:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 24 Jun 2022 02:29:12 +0000   Fri, 24 Jun 2022 02:30:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 24 Jun 2022 02:29:12 +0000   Fri, 24 Jun 2022 02:30:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 31 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 24 Jun 2022 02:29:12 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 24 Jun 2022 02:29:12 +0000   Fri, 24 Jun 2022 02:30:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 24 Jun 2022 02:29:12 +0000   Fri, 24 Jun 2022 02:30:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 24 Jun 2022 02:29:12 +0000   Fri, 24 Jun 2022 02:30:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 30 lines ...
Labels:             <none>
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    save-managers: true
CreationTimestamp:  Fri, 24 Jun 2022 02:29:12 +0000
Taints:             node.kubernetes.io/unreachable:NoSchedule
Unschedulable:      false
Lease:              Failed to get lease: leases.coordination.k8s.io "127.0.0.1" not found
Conditions:
  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason                   Message
  ----             ------    -----------------                 ------------------                ------                   -------
  Ready            Unknown   Fri, 24 Jun 2022 02:29:12 +0000   Fri, 24 Jun 2022 02:30:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  MemoryPressure   Unknown   Fri, 24 Jun 2022 02:29:12 +0000   Fri, 24 Jun 2022 02:30:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
  DiskPressure     Unknown   Fri, 24 Jun 2022 02:29:12 +0000   Fri, 24 Jun 2022 02:30:16 +0000   NodeStatusNeverUpdated   Kubelet never posted node status.
... skipping 149 lines ...
yes
has:the server doesn't have a resource type
Successful
message:yes
has:yes
Successful
message:error: --subresource can not be used with NonResourceURL
has:subresource can not be used with NonResourceURL
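Note: kubectl auth can-i takes either a resource check (optionally narrowed with --subresource) or a raw non-resource URL, but the two forms cannot be combined. Sketches of each:
   kubectl auth can-i get pods --subresource=log
   kubectl auth can-i get /logs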
Successful
Successful
message:yes
0
has:0
... skipping 59 lines ...
		{Verbs:[get list watch] APIGroups:[] Resources:[configmaps] ResourceNames:[] NonResourceURLs:[]}
legacy-script.sh:855: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
legacy-script.sh:856: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
legacy-script.sh:857: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
legacy-script.sh:858: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
Successful
message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
has:only rbac.authorization.k8s.io/v1 is supported
rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
role.rbac.authorization.k8s.io "testing-R" deleted
warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
... skipping 24 lines ...
discovery.sh:91: Successful get all -l'app=cassandra' {{range.items}}{{range .metadata.labels}}{{.}}:{{end}}{{end}}: cassandra:cassandra:cassandra:cassandra:
I0624 02:35:07.074298   57859 event.go:294] "Event occurred" object="namespace-1656038106-3061/cassandra" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cassandra-q64xx"
pod "cassandra-5zcq4" deleted
pod "cassandra-8wrkn" deleted
replicationcontroller "cassandra" deleted
I0624 02:35:07.166239   57859 event.go:294] "Event occurred" object="namespace-1656038106-3061/cassandra" kind="ReplicationController" apiVersion="v1" type="Normal" reason="SuccessfulCreate" message="Created pod: cassandra-4jz6m"
E0624 02:35:07.204966   57859 replica_set.go:536] sync "namespace-1656038106-3061/cassandra" failed with replicationcontrollers "cassandra" not found
service "cassandra" deleted
+++ exit code: 0
Recording: run_kubectl_explain_tests
Running command: run_kubectl_explain_tests

+++ Running case: test-cmd.run_kubectl_explain_tests 
... skipping 172 lines ...
has:includeObject=Object
get.sh:279: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
get.sh:283: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
get.sh:288: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
W0624 02:35:09.348420   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:35:09.348461   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
pod/sorted-pod1 created
get.sh:292: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: sorted-pod1:
pod/sorted-pod2 created
get.sh:296: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: sorted-pod1:sorted-pod2:
pod/sorted-pod3 created
get.sh:300: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: sorted-pod1:sorted-pod2:sorted-pod3:
... skipping 191 lines ...
namespace-1656038093-21508   default   0         19s
namespace-1656038095-22126   default   0         17s
namespace-1656038106-3061    default   0         6s
some-other-random            default   0         7s
has:all-ns-test-2
namespace "all-ns-test-1" deleted
W0624 02:35:14.521832   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:35:14.521866   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace "all-ns-test-2" deleted
W0624 02:35:21.678010   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:35:21.678052   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0624 02:35:22.481211   57859 namespace_controller.go:185] Namespace has been deleted all-ns-test-1
get.sh:392: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
get.sh:396: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
get.sh:400: Successful get nodes {{range.items}}{{.metadata.name}}:{{end}}: 127.0.0.1:
... skipping 10 lines ...
message:Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
No resources found
has:PodSecurityPolicy is deprecated
Successful
message:Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
No resources found
error: 1 warning received
has:PodSecurityPolicy is deprecated
Successful
message:Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
No resources found
error: 1 warning received
has:error: 1 warning received
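Note: "error: 1 warning received" is the effect of promoting server-sent warnings to a failing exit code; a sketch of the flag that produces it (resource name from this test):
   kubectl get podsecuritypolicies --warnings-as-errors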
Recording: run_template_output_tests
Running command: run_template_output_tests

+++ Running case: test-cmd.run_template_output_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_template_output_tests
... skipping 176 lines ...
Successful
message:foo:
has:foo:
Successful
message:foo:
has:foo:
W0624 02:35:27.672528   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:35:27.672562   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:foo:
has:foo:
I0624 02:35:27.738196   57859 namespace_controller.go:185] Namespace has been deleted all-ns-test-2
Successful
message:foo:
... skipping 366 lines ...
node/127.0.0.1 cordoned (server dry run)
WARNING: deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: namespace-1656038132-15019/test-pod-1
evicting pod namespace-1656038132-15019/test-pod-1 (server dry run)
node/127.0.0.1 drained (server dry run)
node-management.sh:140: Successful get pods {{range .items}}{{.metadata.name}},{{end}}: test-pod-1,test-pod-2,
WARNING: deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: namespace-1656038132-15019/test-pod-1
W0624 02:35:59.637511   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:35:59.637541   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:node/127.0.0.1 cordoned
evicting pod namespace-1656038132-15019/test-pod-1
pod "test-pod-1" has DeletionTimestamp older than 1 seconds, skipping
node/127.0.0.1 drained
has:evicting pod .*/test-pod-1
node-management.sh:145: Successful get pods/test-pod-2 {{.metadata.deletionTimestamp}}: <no value>
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "test-pod-1" force deleted
W0624 02:36:07.190147   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:36:07.190187   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "test-pod-2" force deleted
pod/test-pod-1 created
pod/test-pod-2 created
node/127.0.0.1 uncordoned
node-management.sh:151: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
... skipping 5 lines ...
message:node/127.0.0.1 already uncordoned (server dry run)
has:already uncordoned
node-management.sh:161: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
node/127.0.0.1 labeled
node-management.sh:166: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
Successful
message:error: cannot specify both a node name and a --selector option
See 'kubectl drain -h' for help and examples
has:cannot specify both a node name
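Note: drain (like cordon) targets nodes either by name or by --selector, never both at once. Sketches of the two forms (label illustrative):
   kubectl drain 127.0.0.1 --ignore-daemonsets
   kubectl drain --selector=test=label --ignore-daemonsets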
node-management.sh:172: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
node-management.sh:174: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
node-management.sh:176: Successful get pods {{range .items}}{{.metadata.name}},{{end}}: test-pod-1,test-pod-2,
Successful
... skipping 78 lines ...
WARNING: deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: namespace-1656038132-15019/test-pod-1, namespace-1656038132-15019/test-pod-2
evicting pod namespace-1656038132-15019/test-pod-1 (dry run)
evicting pod namespace-1656038132-15019/test-pod-2 (dry run)
node/127.0.0.1 drained (dry run)
has:/v1/pods?fieldSelector=spec.nodeName%3D127.0.0.1&limit=500 200 OK
Successful
message:error: USAGE: cordon NODE [flags]
See 'kubectl cordon -h' for help and examples
has:error\: USAGE\: cordon NODE
node/127.0.0.1 already uncordoned
Successful
message:error: You must provide one or more resources by argument or filename.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
   '<resource> <name>'
   '<resource>'
has:must provide one or more resources
... skipping 18 lines ...
+++ [0624 02:36:09] Testing kubectl plugins
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/version/kubectl-version
  - warning: kubectl-version overwrites existing command: "kubectl version"
error: one plugin warning was found
has:kubectl-version overwrites existing command: "kubectl version"
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/kubectl-foo
test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
  - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
error: one plugin warning was found
has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/kubectl-foo
has:plugins are available
Successful
message:Unable to read directory "test/fixtures/pkg/kubectl/plugins/empty" from your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory. Skipping...
error: unable to find any kubectl plugins in your PATH
has:unable to find any kubectl plugins in your PATH
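The plugin cases above exercise kubectl's discovery rule: any executable named kubectl-<name> on PATH becomes the subcommand "kubectl <name>". A minimal sketch, with /tmp/kubectl-plugins as a hypothetical stand-in for the fixture directories:

  mkdir -p /tmp/kubectl-plugins
  cat > /tmp/kubectl-plugins/kubectl-foo <<'EOF'
  #!/usr/bin/env bash
  echo "I am plugin foo"
  EOF
  chmod +x /tmp/kubectl-plugins/kubectl-foo
  # "kubectl foo" now resolves to the kubectl-foo executable on PATH
  PATH=/tmp/kubectl-plugins:$PATH kubectl foo
  PATH=/tmp/kubectl-plugins:$PATH kubectl plugin list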
Successful
message:I am plugin foo
has:plugin foo
Successful
message:I am plugin bar called with args test/fixtures/pkg/kubectl/plugins/bar/kubectl-bar arg1
... skipping 10 lines ...

+++ Running case: test-cmd.run_impersonation_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_impersonation_tests
+++ [0624 02:36:09] Testing impersonation
Successful
message:error: requesting uid, groups or user-extra for test-admin without impersonating a user
has:without impersonating a user
Successful
message:error: requesting uid, groups or user-extra for test-admin without impersonating a user
has:without impersonating a user
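Both failures assert that uid/group impersonation flags are rejected without a user to impersonate. Sketched with placeholder values, assuming the flag set supported on this branch:

  # Fails: requests a uid without impersonating a user
  kubectl get pods --as-uid=1234
  # Works: impersonate a user, optionally adding groups and a uid
  kubectl get pods --as=user1 --as-group=system:authenticated --as-uid=1234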
certificatesigningrequest.certificates.k8s.io/foo created
authorization.sh:60: Successful get csr/foo {{.spec.username}}: user1
authorization.sh:61: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
certificatesigningrequest.certificates.k8s.io "foo" deleted
certificatesigningrequest.certificates.k8s.io/foo created
... skipping 19 lines ...
I0624 02:36:11.756175   57859 event.go:294] "Event occurred" object="namespace-1656038171-17517/test-1" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-1-68fd6f9fb to 1"
I0624 02:36:11.775083   57859 event.go:294] "Event occurred" object="namespace-1656038171-17517/test-1-68fd6f9fb" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-1-68fd6f9fb-v6qlr"
deployment.apps/test-2 created
I0624 02:36:11.849567   57859 event.go:294] "Event occurred" object="namespace-1656038171-17517/test-2" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set test-2-79d95cd99 to 1"
I0624 02:36:11.863534   57859 event.go:294] "Event occurred" object="namespace-1656038171-17517/test-2-79d95cd99" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: test-2-79d95cd99-hv7kp"
wait.sh:36: Successful get deployments {{range .items}}{{.metadata.name}},{{end}}: test-1,test-2,
W0624 02:36:12.090405   57859 reflector.go:324] k8s.io/client-go/metadata/metadatainformer/informer.go:90: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0624 02:36:12.090444   57859 reflector.go:138] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps "test-1" deleted
deployment.apps "test-2" deleted
Successful
message:deployment.apps/test-1 condition met
deployment.apps/test-2 condition met
has:test-1 condition met
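The wait.sh case above blocks until both deployments report the Available condition; a rough equivalent of that invocation:

  # Returns once both deployments are Available, or fails at the timeout
  kubectl wait deployment/test-1 deployment/test-2 \
    --for=condition=Available --timeout=60s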
... skipping 82 lines ...
I0624 02:36:17.771423   54309 controller.go:89] Shutting down OpenAPI AggregationController
I0624 02:36:17.771456   54309 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::hack/testdata/ca.crt"
I0624 02:36:17.771474   54309 dynamic_cafile_content.go:170] "Shutting down controller" name="client-ca-bundle::hack/testdata/ca.crt"
I0624 02:36:17.771486   54309 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
I0624 02:36:17.771545   54309 dynamic_serving_content.go:145] "Shutting down controller" name="serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key"
I0624 02:36:17.771568   54309 secure_serving.go:311] Stopped listening on 127.0.0.1:6443
W0624 02:36:17.772137   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
... skipping 24 lines ...
{"level":"warn","ts":"2022-06-24T02:36:17.774Z","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000342c40/#initially=[http://127.0.0.1:2379]","attempt":0,"error":"rpc error: code = Unavailable desc = error reading from server: EOF"}
W0624 02:36:17.774738   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
... skipping 38 lines ...
junit report dir: /tmp/junit-results
+++ [0624 02:36:17] Clean up complete
+ make test-integration
W0624 02:36:18.772965   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.772985   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.773000   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.773000   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.772965   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.773073   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.773111   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.773119   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.773121   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.773119   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.773147   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.773248   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.773262   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.773281   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.773620   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.773637   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.773658   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.773620   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.773702   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.773716   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.773743   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.773746   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.773811   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.775141   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.775183   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.775197   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.775261   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.775259   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.775282   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.775296   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.775297   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.775344   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.775372   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.775417   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.775421   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.775441   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.775463   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.775557   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.775626   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.775679   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.775732   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.775799   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.775807   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.775854   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.775913   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.775924   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.775933   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.775988   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.776045   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.776078   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.776084   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.777344   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.777354   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.777370   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.777393   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.777398   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.777417   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.777432   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.777598   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.777634   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.777700   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.777715   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.777872   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:18.777899   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.067095   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.069404   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.076820   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.094377   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.105983   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.111325   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.123242   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.126540   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.144896   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.147319   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.160962   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.160962   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.194730   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.203087   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.236677   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.248584   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.249807   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.251003   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.264468   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.271799   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.274042   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.278425   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.330076   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.332373   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.346804   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.354421   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.354421   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.355712   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.361039   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.364299   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.366617   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.371093   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.380421   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.381591   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.382795   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.448171   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.453658   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.453771   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.464122   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.465324   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.467603   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.467730   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.474024   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.491759   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0624 02:36:20.507446   54309 clientconn.go:1331] [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
... skipping 19 lines (identical gRPC reconnect warnings) ...
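The run of clientconn.go warnings above is grpc-go's background reconnect loop: the etcd client keeps redialing 127.0.0.1:2379 while the server is down, cycling CONNECTING -> TRANSIENT_FAILURE. A minimal standalone sketch (not this job's code) that reproduces the behavior against an unreachable endpoint:

```go
// Sketch: observe grpc-go's transparent reconnect behavior against an
// endpoint with nothing listening (e.g. etcd's 127.0.0.1:2379 while it
// restarts). grpc-go emits the clientconn.go "Reconnecting..." warnings
// seen above on each failed transport attempt.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// grpc.Dial returns immediately; connection attempts run in the background.
	conn, err := grpc.Dial("127.0.0.1:2379",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Watch the channel's connectivity state until the deadline expires.
	for {
		s := conn.GetState()
		log.Printf("channel state: %v", s)
		if !conn.WaitForStateChange(ctx, s) {
			return // context expired; the server never came up
		}
	}
}
```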
+++ [0624 02:36:21] Checking etcd is on PATH
/home/prow/go/src/k8s.io/kubernetes/third_party/etcd/etcd
+++ [0624 02:36:21] Starting etcd instance
etcd --advertise-client-urls http://127.0.0.1:2379 --data-dir /tmp/tmp.Z3hVcP7Qz0 --listen-client-urls http://127.0.0.1:2379 --log-level=debug > "/logs/artifacts/etcd.9a4c0680-f363-11ec-a575-76f699ad3721.root.log.DEBUG.20220624-023621.94714" 2>/dev/null
Waiting for etcd to come up.
+++ [0624 02:36:22] On try 2, etcd: {"health":"true","reason":""}
{"header":{"cluster_id":"14841639068965178418","member_id":"10276657743932975437","revision":"2","raft_term":"2"}}
Periodically scraping etcd to /logs/artifacts/etcd-scrapes.
+++ [0624 02:36:22] Running integration test cases
E0624 02:36:23.042008   54309 controller.go:189] Unable to remove endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/10.33.27.3, ResourceVersion: 0, AdditionalErrorMsg: 
+++ [0624 02:36:27] Running tests without code coverage 
{"Time":"2022-06-24T02:39:10.064019243Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/tracing","Output":"ok  \tk8s.io/kubernetes/test/integration/apiserver/tracing\t8.149s\n"}
{"Time":"2022-06-24T02:39:11.811649058Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/podlogs","Output":"ok  \tk8s.io/kubernetes/test/integration/apiserver/podlogs\t11.440s\n"}
{"Time":"2022-06-24T02:39:32.754103603Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestSelfSubjectAccessReview","Output":"rc/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:236 +0xf1\\nnet/http.Error({0x5568f58, 0xc009307380}, {0xc00931a1e0, 0x60}, 0x2)\\n\\t/usr/local/go/src/net/http/server.go:2059 +0x198\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.InternalError({0x5568f58, 0xc009307380}, 0xc0093074a0, {0x54722e0, 0xc00926ec18})\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/errors.go:75 +0xea\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1({0x5568f58, 0xc009307380}, 0xc00931c500)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go:69 +0x39f\\nnet/http.HandlerFunc.ServeHTTP(0x8241a78, {0x5568f58, 0xc009307380}, 0xa)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/"}
{"Time":"2022-06-24T02:39:32.754115006Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestSelfSubjectAccessReview","Output":"iority-and-fairness.go:277 +0x118\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol.(*configController).Handle.func2()\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_filter.go:191 +0x1eb\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing/queueset.(*request).Finish.func1(0xc008a8fb30, 0xc004dbebd0, 0x40f2e7)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing/queueset/queueset.go:357 +0x65\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing/queueset.(*request).Finish(0xc004a42660, 0x9)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing/queueset/queueset.go:358 +0x45\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol.(*configController).Handle(0xc001dd6dc0, {0x556e7d8, 0xc0093074a0}, {0xc009258"}
{"Time":"2022-06-24T02:39:32.754131034Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestSelfSubjectAccessReview","Output":"al/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x5568f58, 0xc009307380}, 0xc00931c500)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:104 +0x1a5\\nnet/http.HandlerFunc.ServeHTTP(0x4be9740, {0x5568f58, 0xc009307380}, 0xc00165d3b8)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1({0x5568f58, 0xc009307380}, 0xc00931c500)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:50 +0x21c\\nnet/http.HandlerFunc.ServeHTTP(0x8241a78, {0x5568f58, 0xc009307380}, 0xa)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x5568f58, 0xc009307380}, 0xc00931c500)\\n\\t/home/prow/go/src"}
{"Time":"2022-06-24T02:39:32.754139086Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestSelfSubjectAccessReview","Output":"/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:80 +0x178\\nnet/http.HandlerFunc.ServeHTTP(0xc009307440, {0x5568f58, 0xc009307380}, 0x11e2f0a)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x5568f58, 0xc009307380}, 0xc00931c500)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:104 +0x1a5\\nnet/http.HandlerFunc.ServeHTTP(0x8241a78, {0x5568f58, 0xc009307380}, 0xa)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x5568f58, 0xc009307380}, 0xc00931c500)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:80 +0x178\\nnet/http.HandlerFunc.ServeHTTP(0xc009307440, {"}
{"Time":"2022-06-24T02:39:43.077393791Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver","Test":"TestListOptions/watchCacheEnabled=true/limit=0_continue=empty_rv=invalid_rvMatch=","Output":"00000)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:236 +0xf1\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.(*ResponseWriterDelegator).WriteHeader(0x637a560, 0xc0123238c0)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:652 +0x29\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.(*deferredResponseWriter).Write(0xc012d77860, {0xc015b7a000, 0xc0, 0x4b915f})\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:229 +0x5bb\\nencoding/json.(*Encoder).Encode(0xc000df2f88, {0x6fe3680, 0xc004f81900})\\n\\t/usr/local/go/src/encoding/json/stream.go:231 +0x1f6\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json.(*Serializer).doEncode(0x70a6312, {0x7970e50, 0xc004f81900}, {0x7885380, 0"}
{"Time":"2022-06-24T02:39:43.077402354Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver","Test":"TestListOptions/watchCacheEnabled=true/limit=0_continue=empty_rv=invalid_rvMatch=","Output":"xc012d77860})\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json/json.go:233 +0x19a\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json.(*Serializer).Encode(0xc000888140, {0x7970e50, 0xc004f81900}, {0x7885380, 0xc012d77860})\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json/json.go:207 +0xfc\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning.(*codec).doEncode(0xc004f819a0, {0x7970e50, 0xc004f81900}, {0x7885380, 0xc012d77860})\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning/versioning.go:235 +0xb62\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning.(*codec).Encode(0xc004f819a0, {0x7970e50, 0xc004f81900}, {0x7885380, 0xc012d77860})\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/loc"}
{"Time":"2022-06-24T02:39:43.077416717Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver","Test":"TestListOptions/watchCacheEnabled=true/limit=0_continue=empty_rv=invalid_rvMatch=","Output":"al/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning/versioning.go:191 +0x106\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.SerializeObject({0x70c736a, 0x10}, {0x7f5154630638, 0xc004f819a0}, {0x79871d8, 0xc00745fbc0}, 0xc004768500, 0x1f4, {0x7970e50, 0xc004f81900})\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:106 +0x4a9\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.WriteObjectNegotiated({0x798a298, 0xc00cf7dfc0}, {0x798a538, 0xa954048}, {{0x70a7bfb, 0x72ab830}, {0x70a6312, 0x0}}, {0x79871d8, 0xc00745fbc0}, ...)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:274 +0x505\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.ErrorNegotiated({0x788a340, 0xc002e3ae88}, {0x7"}
... skipping 16 lines ...
{"Time":"2022-06-24T02:39:44.268068413Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestAuthModeAlwaysAllow","Output":"g/runtime/serializer/versioning/versioning.go:191 +0x106\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.SerializeObject({0x4cf761f, 0x10}, {0x7f8e083bd1c0, 0xc00f8fcbe0}, {0x5568f58, 0xc00d1e74a0}, 0xc008346300, 0x1f7, {0x5553468, 0xc00f8fca00})\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:106 +0x4a9\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.WriteObjectNegotiated({0x556bce8, 0xc00c546e40}, {0x556bef8, 0x8241a78}, {{0x0, 0x1d8426d}, {0x4cd7b0a, 0x556e7d8}}, {0x5568f58, 0xc00d1e74a0}, ...)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:274 +0x505\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.ErrorNegotiated({0x546d560, 0xc00f8fc820}, {0x556bce8, 0xc00c546e40}, {{0x0, 0x5553e18}, {0x4cd7b0a, 0x4"}
{"Time":"2022-06-24T02:39:44.268101023Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestAuthModeAlwaysAllow","Output":"1e6960}, 0xc008346200)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:104 +0x1a5\\nnet/http.HandlerFunc.ServeHTTP(0xc00c514780, {0x5568f58, 0xc00d1e6960}, 0x0)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1({0x5568f58, 0xc00d1e6960}, 0xc008346200)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go:64 +0x425\\nnet/http.HandlerFunc.ServeHTTP(0x8241a78, {0x5568f58, 0xc00d1e6960}, 0xa)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x5568f58, 0xc00d1e6960}, 0xc008346200)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:80 +0x178\\nnet/http.HandlerFu"}
{"Time":"2022-06-24T02:39:44.268115586Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestAuthModeAlwaysAllow","Output":"filter.go:191 +0x1eb\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing/queueset.(*request).Finish.func1(0xc005bb94a0, 0xc00738ebd0, 0x40f2e7)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing/queueset/queueset.go:357 +0x65\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing/queueset.(*request).Finish(0xc0043046f0, 0x9)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing/queueset/queueset.go:358 +0x45\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol.(*configController).Handle(0xc00c229180, {0x556e7d8, 0xc00d1e6de0}, {0xc0081b3340, {0x556f5d8, 0xc00c47c800}}, 0x1, 0x554fa48, 0xc0024dbcb0, 0xc005e27e50)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_filter.go:181 +0x853\\nk8s.io/kubernetes/vendor/k8s.io/ap"}
{"Time":"2022-06-24T02:39:44.268131855Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestAuthModeAlwaysAllow","Output":"s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:104 +0x1a5\\nnet/http.HandlerFunc.ServeHTTP(0x4ecbd90, {0x5568f58, 0xc00d1e6960}, 0x4d10389)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1({0x5568f58, 0xc00d1e6960}, 0xc008346200)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:50 +0x21c\\nnet/http.HandlerFunc.ServeHTTP(0x8241a78, {0x5568f58, 0xc00d1e6960}, 0xa)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x5568f58, 0xc00d1e6960}, 0xc008346200)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:80 +0x178\\nnet/http.HandlerFunc.ServeHTTP(0xc00d1e6c90, {0x5568f58, 0xc00d1e6960}, 0x11e2f0a)\\n\\t/usr/local/go/src/net/http/server.go:20"}
{"Time":"2022-06-24T02:39:44.268139147Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestAuthModeAlwaysAllow","Output":"47 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x5568f58, 0xc00d1e6960}, 0xc008346200)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:104 +0x1a5\\nnet/http.HandlerFunc.ServeHTTP(0x8241a78, {0x5568f58, 0xc00d1e6960}, 0xa)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x5568f58, 0xc00d1e6960}, 0xc008346200)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:80 +0x178\\nnet/http.HandlerFunc.ServeHTTP(0xc00d1e6c90, {0x5568f58, 0xc00d1e6960}, 0x11e2f0a)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x5568f58, 0xc00d1e6960}, 0xc008346200)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_ou"}
{"Time":"2022-06-24T02:39:44.268145912Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestAuthModeAlwaysAllow","Output":"tput/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:104 +0x1a5\\nnet/http.HandlerFunc.ServeHTTP(0x556e7d8, {0x5568f58, 0xc00d1e6960}, 0x4ec57d8)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.withAuthentication.func1({0x5568f58, 0xc00d1e6960}, 0xc008346200)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authentication.go:80 +0x839\\nnet/http.HandlerFunc.ServeHTTP(0x556e7a0, {0x5568f58, 0xc00d1e6960}, 0x5452838)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x5568f58, 0xc00d1e6960}, 0xc008346000)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:89 +0x46b\\nnet/http.HandlerFunc.ServeHTTP(0x10000c005a288a0, {0x5568f58, 0xc00d1e69"}
{"Time":"2022-06-24T02:39:44.268152866Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestAuthModeAlwaysAllow","Output":"60}, 0xc005e263c0)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1()\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:114 +0x70\\ncreated by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:100 +0x1dd\\n\" addedInfo=\"\\nlogging error output: \\\"{\\\\\\\"kind\\\\\\\":\\\\\\\"Status\\\\\\\",\\\\\\\"apiVersion\\\\\\\":\\\\\\\"v1\\\\\\\",\\\\\\\"metadata\\\\\\\":{},\\\\\\\"status\\\\\\\":\\\\\\\"Failure\\\\\\\",\\\\\\\"message\\\\\\\":\\\\\\\"no endpoints available for service \\\\\\\\\\\\\\\"a\\\\\\\\\\\\\\\"\\\\\\\",\\\\\\\"reason\\\\\\\":\\\\\\\"ServiceUnavailable\\\\\\\",\\\\\\\"code\\\\\\\":503}\\\\n\\\"\\n\"\n"}
{"Time":"2022-06-24T02:39:55.685660585Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestAliceNotForbiddenOrUnauthorized","Output":"rnetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:236 +0xf1\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.(*ResponseWriterDelegator).WriteHeader(0x402aae0, 0xc012badb80)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:652 +0x29\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.(*deferredResponseWriter).Write(0xc0127c1260, {0xc002d25000, 0xa3, 0x2b61})\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:229 +0x5bb\\nencoding/json.(*Encoder).Encode(0xc00d193000, {0x4c18040, 0xc00d2ceb40})\\n\\t/usr/local/go/src/encoding/json/stream.go:231 +0x1f6\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json.(*Serializer).doEncode(0x4cd7b0a, {0x5553468, 0xc00d2ceb40}, {0x546dd40, 0xc0127c1260})\\n\\t/home/prow/go/src/k8s.io"}
{"Time":"2022-06-24T02:39:55.685670133Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestAliceNotForbiddenOrUnauthorized","Output":"/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json/json.go:233 +0x19a\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json.(*Serializer).Encode(0xc0000d3b80, {0x5553468, 0xc00d2ceb40}, {0x546dd40, 0xc0127c1260})\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json/json.go:207 +0xfc\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning.(*codec).doEncode(0xc00d2cebe0, {0x5553468, 0xc00d2ceb40}, {0x546dd40, 0xc0127c1260})\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning/versioning.go:235 +0xb62\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning.(*codec).Encode(0xc00d2cebe0, {0x5553468, 0xc00d2ceb40}, {0x546dd40, 0xc0127c1260})\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io"}
{"Time":"2022-06-24T02:39:55.685678433Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestAliceNotForbiddenOrUnauthorized","Output":"/apimachinery/pkg/runtime/serializer/versioning/versioning.go:191 +0x106\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.SerializeObject({0x4cf761f, 0x10}, {0x7f8e083bd1c0, 0xc00d2cebe0}, {0x5568f58, 0xc00a2a6f30}, 0xc0091f8500, 0x1f7, {0x5553468, 0xc00d2ceb40})\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:106 +0x4a9\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.WriteObjectNegotiated({0x556bce8, 0xc00fffcb80}, {0x556bef8, 0x8241a78}, {{0x0, 0x1d8426d}, {0x4cd7b0a, 0x556e7d8}}, {0x5568f58, 0xc00a2a6f30}, ...)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:274 +0x505\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.ErrorNegotiated({0x546d560, 0xc00d2ceaa0}, {0x556bce8, 0xc00fffcb80}, {{0x0, 0x5553e18},"}
{"Time":"2022-06-24T02:39:55.685736495Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestAliceNotForbiddenOrUnauthorized","Output":"flowcontrol/apf_filter.go:191 +0x1eb\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing/queueset.(*request).Finish.func1(0xc008940d20, 0xc00bf2cbd0, 0x40f2e7)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing/queueset/queueset.go:357 +0x65\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing/queueset.(*request).Finish(0xc005898380, 0xe)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing/queueset/queueset.go:358 +0x45\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol.(*configController).Handle(0xc014370a00, {0x556e7d8, 0xc00a2a6ae0}, {0xc0091a6630, {0x556f5d8, 0xc00c059a80}}, 0x1, 0x554fa48, 0xc0024dbcb0, 0xc00a2c6410)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_filter.go:181 +0x853\\nk8s.io/kubernetes/"}
{"Time":"2022-06-24T02:39:55.685754156Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestAliceNotForbiddenOrUnauthorized","Output":"rnetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:104 +0x1a5\\nnet/http.HandlerFunc.ServeHTTP(0x556e7d8, {0x5568f58, 0xc00a2a6660}, 0x14)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1({0x5568f58, 0xc00a2a6660}, 0xc0091f8400)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:50 +0x21c\\nnet/http.HandlerFunc.ServeHTTP(0x8241a78, {0x5568f58, 0xc00a2a6660}, 0xa)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x5568f58, 0xc00a2a6660}, 0xc0091f8400)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:80 +0x178\\nnet/http.HandlerFunc.ServeHTTP(0xc00a2a6750, {0x5568f58, 0xc00a2a6660}, 0x11e2f0a)\\n\\t/usr/local/go/src/net/http/s"}
{"Time":"2022-06-24T02:39:55.68576223Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestAliceNotForbiddenOrUnauthorized","Output":"erver.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x5568f58, 0xc00a2a6660}, 0xc0091f8400)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:104 +0x1a5\\nnet/http.HandlerFunc.ServeHTTP(0x8241a78, {0x5568f58, 0xc00a2a6660}, 0xa)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x5568f58, 0xc00a2a6660}, 0xc0091f8400)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:80 +0x178\\nnet/http.HandlerFunc.ServeHTTP(0xc00a2a6750, {0x5568f58, 0xc00a2a6660}, 0x11e2f0a)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x5568f58, 0xc00a2a6660}, 0xc0091f8400)\\n\\t/home/prow/go/src/k8s.io/kub"}
... skipping 34 lines ...
{"Time":"2022-06-24T02:40:19.294989158Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestUpdateVeryLargeObject","Output":"/handler.go:146 +0x5bb\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithWebhookDuration.func1({0x554cff8, 0xc043ce2c90}, 0xc043cc5900)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/webhook_duration.go:31 +0x24e\\nnet/http.HandlerFunc.ServeHTTP(0xc043ce2d50, {0x554cff8, 0xc043ce2c90}, 0xa6528a)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x554cff8, 0xc043ce2c90}, 0xc043cc5900)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:104 +0x1a5\\nnet/http.HandlerFunc.ServeHTTP(0xc01acfb350, {0x554cff8, 0xc043ce2c90}, 0x0)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1({0x554cff8, 0xc043ce2c90}, 0xc043cc5900)\\n\\t/home/prow/go/src/k8s.io/kub"}
{"Time":"2022-06-24T02:40:19.294997245Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestUpdateVeryLargeObject","Output":"ernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go:64 +0x425\\nnet/http.HandlerFunc.ServeHTTP(0x8201938, {0x554cff8, 0xc043ce2c90}, 0xa)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x554cff8, 0xc043ce2c90}, 0xc043cc5900)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:80 +0x178\\nnet/http.HandlerFunc.ServeHTTP(0xc043ce2d50, {0x554cff8, 0xc043ce2c90}, 0xa6528a)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x554cff8, 0xc043ce2c90}, 0xc043cc5900)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:104 +0x1a5\\nnet/http.HandlerFunc.ServeHTTP(0xc042cb3ac0, {0x554cff8, 0xc043c"}
{"Time":"2022-06-24T02:40:19.295005767Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestUpdateVeryLargeObject","Output":"e2c90}, 0xc016bcdb60)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithPriorityAndFairness.func1.9()\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/priority-and-fairness.go:277 +0x118\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol.(*configController).Handle.func2()\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_filter.go:191 +0x1eb\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing/queueset.(*request).Finish.func1(0xc0427f0690, 0xc029760bd0, 0x40f2e7)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing/queueset/queueset.go:357 +0x65\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing/queueset.(*request).Finish(0xc01ac88b30, 0x9)\\n\\t/home/prow/go"}
{"Time":"2022-06-24T02:40:19.29501395Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestUpdateVeryLargeObject","Output":"/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/fairqueuing/queueset/queueset.go:358 +0x45\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol.(*configController).Handle(0xc02b565b80, {0x5552878, 0xc043ce2db0}, {0xc04257d970, {0x5553678, 0xc042cb3a00}}, 0x1, 0x5533b08, 0xc008fdcfc0, 0xc0380e74f0)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_filter.go:181 +0x853\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithPriorityAndFairness.func1({0x554cff8, 0xc043ce2c90}, 0xc043cc5900)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/priority-and-fairness.go:280 +0xdda\\nnet/http.HandlerFunc.ServeHTTP(0x8201938, {0x554cff8, 0xc043ce2c90}, 0xa)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({"}
{"Time":"2022-06-24T02:40:19.295031834Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestUpdateVeryLargeObject","Output":"go:50 +0x21c\\nnet/http.HandlerFunc.ServeHTTP(0x8201938, {0x554cff8, 0xc043ce2c90}, 0xa)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x554cff8, 0xc043ce2c90}, 0xc043cc5900)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:80 +0x178\\nnet/http.HandlerFunc.ServeHTTP(0xc043ce2d50, {0x554cff8, 0xc043ce2c90}, 0xa6528a)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x554cff8, 0xc043ce2c90}, 0xc043cc5900)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:104 +0x1a5\\nnet/http.HandlerFunc.ServeHTTP(0x8201938, {0x554cff8, 0xc043ce2c90}, 0xa)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg"}
{"Time":"2022-06-24T02:40:19.295048519Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestUpdateVeryLargeObject","Output":"iserver/pkg/endpoints/filters/authentication.go:80 +0x839\\nnet/http.HandlerFunc.ServeHTTP(0x5552840, {0x554cff8, 0xc043ce2c90}, 0x5436fd8)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x554cff8, 0xc043ce2c90}, 0xc043cc5700)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:89 +0x46b\\nnet/http.HandlerFunc.ServeHTTP(0x100000000000001, {0x554cff8, 0xc043ce2c90}, 0xc0380e6be0)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1()\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:114 +0x70\\ncreated by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8"}
{"Time":"2022-06-24T02:40:24.96071759Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestImpersonateWithUID/impersonation_with_only_UID_fails","Output":"erver/pkg/server/filters/timeout.go:236 +0xf1\\nnet/http.Error({0x556bf58, 0xc004c3a460}, {0xc021675800, 0x78}, 0x2)\\n\\t/usr/local/go/src/net/http/server.go:2059 +0x198\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.InternalError({0x556bf58, 0xc004c3a460}, 0x1, {0x54647c0, 0xc01a7ba670})\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/errors.go:75 +0xea\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithImpersonation.func1({0x556bf58, 0xc004c3a460}, 0xc018d76800)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/impersonation.go:46 +0x345\\nnet/http.HandlerFunc.ServeHTTP(0x8241a78, {0x556bf58, 0xc004c3a460}, 0xa)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x556bf58, 0xc004c3a460}, 0xc018d76800)\\n\\t"}
{"Time":"2022-06-24T02:40:24.960725593Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/auth","Test":"TestImpersonateWithUID/impersonation_with_only_UID_fails","Output":"/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:80 +0x178\\nnet/http.HandlerFunc.ServeHTTP(0xc035a8f948, {0x556bf58, 0xc004c3a460}, 0x11e2f0a)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x556bf58, 0xc004c3a460}, 0xc018d76800)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:104 +0x1a5\\nnet/http.HandlerFunc.ServeHTTP(0x8241a78, {0x556bf58, 0xc004c3a460}, 0xa)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x556bf58, 0xc004c3a460}, 0xc018d76800)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:80 +0x178\\nnet/http.HandlerFunc.ServeHTT"}
{"Time":"2022-06-24T02:40:29.401678074Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestPatchVeryLargeObject","Output":"0xc000342000)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:236 +0xf1\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.(*ResponseWriterDelegator).WriteHeader(0x40127a0, 0xc0323c4580)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:652 +0x29\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.(*deferredResponseWriter).Write(0xc0237bf2c0, {0xc078b06000, 0x7d, 0x111e93})\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:229 +0x5bb\\nencoding/json.(*Encoder).Encode(0xc0094368a0, {0x4bfae40, 0xc027e2c960})\\n\\t/usr/local/go/src/encoding/json/stream.go:231 +0x1f6\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json.(*Serializer).doEncode(0x4cba910, {0x5537528, 0xc027e2c960}, {0x545"}
{"Time":"2022-06-24T02:40:29.401688947Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestPatchVeryLargeObject","Output":"2420, 0xc0237bf2c0})\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json/json.go:233 +0x19a\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json.(*Serializer).Encode(0xc000367450, {0x5537528, 0xc027e2c960}, {0x5452420, 0xc0237bf2c0})\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json/json.go:207 +0xfc\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning.(*codec).doEncode(0xc027e2ca00, {0x5537528, 0xc027e2c960}, {0x5452420, 0xc0237bf2c0})\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning/versioning.go:235 +0xb62\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning.(*codec).Encode(0xc027e2ca00, {0x5537528, 0xc027e2c960}, {0x5452420, 0xc0237bf2c0})\\n\\t/home/prow/go/src/k8s.io/kubernetes/_out"}
{"Time":"2022-06-24T02:40:29.401697587Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestPatchVeryLargeObject","Output":"put/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning/versioning.go:191 +0x106\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.SerializeObject({0x4cda42e, 0x10}, {0x7fad2059bb28, 0xc027e2ca00}, {0x554cff8, 0xc04e8eb4a0}, 0xc0289a5700, 0x1f4, {0x5537528, 0xc027e2c960})\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:106 +0x4a9\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.WriteObjectNegotiated({0x554fd58, 0xc04bd00800}, {0x554ff68, 0x8201938}, {{0x0, 0x0}, {0x4cba910, 0xc032514240}}, {0x554cff8, 0xc04e8eb4a0}, ...)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:274 +0x505\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.ErrorNegotiated({0x5455ba0, 0xc017bf6120}, "}
{"Time":"2022-06-24T02:40:29.401724302Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestPatchVeryLargeObject","Output":"ernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.InstrumentRouteFunc.func1(0xc04e8eb3e0, 0xc02abba460)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:509 +0x223\\nk8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).dispatch(0xc0124ed680, {0x554cff8, 0xc04e8eb140}, 0xc0289a5700)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:288 +0x9d1\\nk8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).Dispatch(...)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:199\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP({{0x4cd36a0, 0x5436fd8}, 0xc0124ed680, 0xc03fe1ccb0}, {0x554cff8, 0xc04e8eb140}, 0xc0289a5700)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/serve"}
{"Time":"2022-06-24T02:40:29.401732386Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestPatchVeryLargeObject","Output":"r/handler.go:146 +0x5bb\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithWebhookDuration.func1({0x554cff8, 0xc04e8eb140}, 0xc0289a5600)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/webhook_duration.go:31 +0x24e\\nnet/http.HandlerFunc.ServeHTTP(0xc04e8eb200, {0x554cff8, 0xc04e8eb140}, 0xa6528a)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x554cff8, 0xc04e8eb140}, 0xc0289a5600)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:104 +0x1a5\\nnet/http.HandlerFunc.ServeHTTP(0xc03c04b158, {0x554cff8, 0xc04e8eb140}, 0x0)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithAuthorization.func1({0x554cff8, 0xc04e8eb140}, 0xc0289a5600)\\n\\t/home/prow/go/src/k8s.io/ku"}
... skipping 29 lines ...
{"Time":"2022-06-24T02:41:46.118454524Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestApplyDoesNotChangeManagedFieldsViaSubresources","Output":"inery/pkg/runtime/serializer/versioning/versioning.go:191 +0x106\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.SerializeObject({0x4cda42e, 0x10}, {0x7fad2059bb28, 0xc060e230e0}, {0x554cff8, 0xc060f2de60}, 0xc06108e900, 0x1f8, {0x5537528, 0xc060e23040})\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:106 +0x4a9\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.WriteObjectNegotiated({0x554fd58, 0xc0558d0780}, {0x554ff68, 0x8201938}, {{0x0, 0xc076630a50}, {0x4cba910, 0x5451c60}}, {0x554cff8, 0xc060f2de60}, ...)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:274 +0x505\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.ErrorNegotiated({0x5451c60, 0xc060e22fa0}, {0x554fd58, 0xc0558d0780}, {{0x0, 0x0}, {0x4cba910"}
{"Time":"2022-06-24T02:41:46.118471633Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestApplyDoesNotChangeManagedFieldsViaSubresources","Output":"/metrics.InstrumentRouteFunc.func1(0xc060f2dda0, 0xc07587caf0)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:509 +0x223\\nk8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).dispatch(0xc015e7fb00, {0x554cff8, 0xc060f2db60}, 0xc06108e900)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:288 +0x9d1\\nk8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).Dispatch(...)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:199\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP({{0x4cd36a0, 0x5436fd8}, 0xc015e7fb00, 0xc04aba1180}, {0x554cff8, 0xc060f2db60}, 0xc06108e900)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:146 +0x5bb\\nk8s.io/kubernetes/ve"}
{"Time":"2022-06-24T02:41:46.118490793Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestApplyDoesNotChangeManagedFieldsViaSubresources","Output":"es/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go:64 +0x425\\nnet/http.HandlerFunc.ServeHTTP(0x8201938, {0x554cff8, 0xc060f2db60}, 0xa)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x554cff8, 0xc060f2db60}, 0xc06108e800)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:80 +0x178\\nnet/http.HandlerFunc.ServeHTTP(0xc060f2dc20, {0x554cff8, 0xc060f2db60}, 0xa6528a)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x554cff8, 0xc060f2db60}, 0xc06108e800)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:104 +0x1a5\\nnet/http.HandlerFunc.ServeHTTP(0xc077617f00, {0x554cff8, 0xc060f2db60}, 0xc00a637920)\\n\\t/usr/local/go/src/n"}
{"Time":"2022-06-24T02:41:46.118499758Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestApplyDoesNotChangeManagedFieldsViaSubresources","Output":"et/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithPriorityAndFairness.func1.9()\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/priority-and-fairness.go:277 +0x118\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol.(*configController).Handle.func2()\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_filter.go:191 +0x1eb\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol.immediateRequest.Finish(...)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_controller.go:793\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol.(*configController).Handle(0xc054eeeb40, {0x5552878, 0xc060f2dcb0}, {0xc060ca7810, {0x5553678, 0xc0559edb00}}, 0x1, 0x5533b08, 0xc008fdcfc0, 0xc06ebc3630)\\n\\t/home/prow/go/src/k8s.io/kubernetes"}
{"Time":"2022-06-24T02:41:46.118508093Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestApplyDoesNotChangeManagedFieldsViaSubresources","Output":"/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_filter.go:181 +0x853\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithPriorityAndFairness.func1({0x554cff8, 0xc060f2db60}, 0xc06108e800)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/priority-and-fairness.go:280 +0xdda\\nnet/http.HandlerFunc.ServeHTTP(0x8201938, {0x554cff8, 0xc060f2db60}, 0xa)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x554cff8, 0xc060f2db60}, 0xc06108e800)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:80 +0x178\\nnet/http.HandlerFunc.ServeHTTP(0xc060f2dc20, {0x554cff8, 0xc060f2db60}, 0xa6528a)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.track"}
{"Time":"2022-06-24T02:41:46.118523297Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestApplyDoesNotChangeManagedFieldsViaSubresources","Output":"atency.go:80 +0x178\\nnet/http.HandlerFunc.ServeHTTP(0xc060f2dc20, {0x554cff8, 0xc060f2db60}, 0xa6528a)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x554cff8, 0xc060f2db60}, 0xc06108e800)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:104 +0x1a5\\nnet/http.HandlerFunc.ServeHTTP(0x8201938, {0x554cff8, 0xc060f2db60}, 0xa)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x554cff8, 0xc060f2db60}, 0xc06108e800)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:80 +0x178\\nnet/http.HandlerFunc.ServeHTTP(0xc060f2dc20, {0x554cff8, 0xc060f2db60}, 0xa6528a)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.i"}
{"Time":"2022-06-24T02:41:46.11854212Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestApplyDoesNotChangeManagedFieldsViaSubresources","Output":"8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:89 +0x46b\\nnet/http.HandlerFunc.ServeHTTP(0x100000000000001, {0x554cff8, 0xc060f2db60}, 0xc075cea5a0)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1()\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:114 +0x70\\ncreated by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:100 +0x1dd\\n\" addedInfo=\"\\nlogging error output: \\\"{\\\\\\\"kind\\\\\\\":\\\\\\\"Status\\\\\\\",\\\\\\\"apiVersion\\\\\\\":\\\\\\\"v1\\\\\\\",\\\\\\\"metadata\\\\\\\":{},\\\\\\\"status\\\\\\\":\\\\\\\"Failure\\\\\\\",\\\\\\\"message\\\\\\\":\\\\\\\"Timeout: request did not complete within requested timeout - context canceled\\\\\\\",\\\\\\\"reason\\\\\\\":\\\\\\\"Timeout\\\\\\\",\\\\\\\"details\\\\\\\":{},\\\\\\\"code\\\\\\\":504}\\\\n\\\"\\n"}
{"Time":"2022-06-24T02:41:57.240829265Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/client","Test":"TestPatch","Output":"        \u0004name\u0012\u0005image*\u0000B\u0000j\u0014/dev/termination-logr\u0006Always\ufffd\u0001\u0000\ufffd\u0001\u0000\ufffd\u0001\u0000\ufffd\u0001\u0004File\u001a\u0006Always \u001e2\u000cClusterFirstB\u0000J\u0000R\u0000X\u0000`\u0000h\u0000r\u0000\ufffd\u0001\u0000\ufffd\u0001\u0000\ufffd\u0001\u0011default-scheduler\ufffd\u00016\n"}
{"Time":"2022-06-24T02:41:57.240840328Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/client","Test":"TestPatch","Output":"        \u001cnode.kubernetes.io/not-ready\u0012\u0006Exists\u001a\u0000\"\tNoExecute(\ufffd\u0002\ufffd\u00018\n"}
{"Time":"2022-06-24T02:41:57.240847829Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/client","Test":"TestPatch","Output":"        \u001enode.kubernetes.io/unreachable\u0012\u0006Exists\u001a\u0000\"\tNoExecute(\ufffd\u0002\ufffd\u0001\u0000\ufffd\u0001\u0000\ufffd\u0001\u0001\ufffd\u0001\u0014PreemptLowerPriority\u001a\u001f\n"}
{"Time":"2022-06-24T02:41:57.272699387Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/client","Test":"TestPatch","Output":"        \u0004name\u0012\u0005image*\u0000B\u0000j\u0014/dev/termination-logr\u0006Always\ufffd\u0001\u0000\ufffd\u0001\u0000\ufffd\u0001\u0000\ufffd\u0001\u0004File\u001a\u0006Always \u001e2\u000cClusterFirstB\u0000J\u0000R\u0000X\u0000`\u0000h\u0000r\u0000\ufffd\u0001\u0000\ufffd\u0001\u0000\ufffd\u0001\u0011default-scheduler\ufffd\u00016\n"}
{"Time":"2022-06-24T02:41:57.272708547Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/client","Test":"TestPatch","Output":"        \u001cnode.kubernetes.io/not-ready\u0012\u0006Exists\u001a\u0000\"\tNoExecute(\ufffd\u0002\ufffd\u00018\n"}
{"Time":"2022-06-24T02:41:57.272714929Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/client","Test":"TestPatch","Output":"        \u001enode.kubernetes.io/unreachable\u0012\u0006Exists\u001a\u0000\"\tNoExecute(\ufffd\u0002\ufffd\u0001\u0000\ufffd\u0001\u0000\ufffd\u0001\u0001\ufffd\u0001\u0014PreemptLowerPriority\u001a\u001f\n"}
... skipping 26 lines ...
{"Time":"2022-06-24T02:42:50.273840631Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestDroppingSubresourceField","Output":"inery/pkg/runtime/serializer/versioning/versioning.go:191 +0x106\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.SerializeObject({0x4cda42e, 0x10}, {0x7fad2059bb28, 0xc0c116c8c0}, {0x554cff8, 0xc0c116ec30}, 0xc0c116b200, 0x1f8, {0x5537528, 0xc0c116c820})\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:106 +0x4a9\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.WriteObjectNegotiated({0x554fd58, 0xc0a5bc3440}, {0x554ff68, 0x8201938}, {{0x0, 0xc0c0f196d0}, {0x4cba910, 0x5451c60}}, {0x554cff8, 0xc0c116ec30}, ...)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:274 +0x505\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.ErrorNegotiated({0x5451c60, 0xc0c116c780}, {0x554fd58, 0xc0a5bc3440}, {{0x0, 0x0}, {0x4cba910"}
{"Time":"2022-06-24T02:42:50.273861233Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestDroppingSubresourceField","Output":"/metrics.InstrumentRouteFunc.func1(0xc0c116eb70, 0xc0c110d490)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:509 +0x223\\nk8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).dispatch(0xc0a5bc6870, {0x554cff8, 0xc0c116e930}, 0xc0c116b200)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:288 +0x9d1\\nk8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).Dispatch(...)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:199\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP({{0x4cd36a0, 0x5436fd8}, 0xc0a5bc6870, 0xc0a5b9c2a0}, {0x554cff8, 0xc0c116e930}, 0xc0c116b200)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:146 +0x5bb\\nk8s.io/kubernetes/ve"}
{"Time":"2022-06-24T02:42:50.273886528Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestDroppingSubresourceField","Output":"es/vendor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go:64 +0x425\\nnet/http.HandlerFunc.ServeHTTP(0x8201938, {0x554cff8, 0xc0c116e930}, 0xa)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x554cff8, 0xc0c116e930}, 0xc0c116b100)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:80 +0x178\\nnet/http.HandlerFunc.ServeHTTP(0xc0c116e9f0, {0x554cff8, 0xc0c116e930}, 0xa6528a)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x554cff8, 0xc0c116e930}, 0xc0c116b100)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:104 +0x1a5\\nnet/http.HandlerFunc.ServeHTTP(0xc0c11299c0, {0x554cff8, 0xc0c116e930}, 0xc00a637920)\\n\\t/usr/local/go/src/n"}
{"Time":"2022-06-24T02:42:50.273895161Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestDroppingSubresourceField","Output":"et/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithPriorityAndFairness.func1.9()\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/priority-and-fairness.go:277 +0x118\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol.(*configController).Handle.func2()\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_filter.go:191 +0x1eb\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol.immediateRequest.Finish(...)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_controller.go:793\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol.(*configController).Handle(0xc0a25e5b80, {0x5552878, 0xc0c116ea80}, {0xc0c114c4d0, {0x5553678, 0xc0a5bc27c0}}, 0x1, 0x5533b08, 0xc008fdcfc0, 0xc0c1160aa0)\\n\\t/home/prow/go/src/k8s.io/kubernetes"}
{"Time":"2022-06-24T02:42:50.273902308Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestDroppingSubresourceField","Output":"/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_filter.go:181 +0x853\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithPriorityAndFairness.func1({0x554cff8, 0xc0c116e930}, 0xc0c116b100)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/priority-and-fairness.go:280 +0xdda\\nnet/http.HandlerFunc.ServeHTTP(0x8201938, {0x554cff8, 0xc0c116e930}, 0xa)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x554cff8, 0xc0c116e930}, 0xc0c116b100)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:80 +0x178\\nnet/http.HandlerFunc.ServeHTTP(0xc0c116e9f0, {0x554cff8, 0xc0c116e930}, 0xa6528a)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.track"}
{"Time":"2022-06-24T02:42:50.273922448Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestDroppingSubresourceField","Output":"atency.go:80 +0x178\\nnet/http.HandlerFunc.ServeHTTP(0xc0c116e9f0, {0x554cff8, 0xc0c116e930}, 0xa6528a)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x554cff8, 0xc0c116e930}, 0xc0c116b100)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:104 +0x1a5\\nnet/http.HandlerFunc.ServeHTTP(0x8201938, {0x554cff8, 0xc0c116e930}, 0xa)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x554cff8, 0xc0c116e930}, 0xc0c116b100)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:80 +0x178\\nnet/http.HandlerFunc.ServeHTTP(0xc0c116e9f0, {0x554cff8, 0xc0c116e930}, 0xa6528a)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.i"}
{"Time":"2022-06-24T02:42:50.273945793Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestDroppingSubresourceField","Output":"8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:89 +0x46b\\nnet/http.HandlerFunc.ServeHTTP(0x100000000000001, {0x554cff8, 0xc0c116e930}, 0xc0af2db090)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1()\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:114 +0x70\\ncreated by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:100 +0x1dd\\n\" addedInfo=\"\\nlogging error output: \\\"{\\\\\\\"kind\\\\\\\":\\\\\\\"Status\\\\\\\",\\\\\\\"apiVersion\\\\\\\":\\\\\\\"v1\\\\\\\",\\\\\\\"metadata\\\\\\\":{},\\\\\\\"status\\\\\\\":\\\\\\\"Failure\\\\\\\",\\\\\\\"message\\\\\\\":\\\\\\\"Timeout: request did not complete within requested timeout - context canceled\\\\\\\",\\\\\\\"reason\\\\\\\":\\\\\\\"Timeout\\\\\\\",\\\\\\\"details\\\\\\\":{},\\\\\\\"code\\\\\\\":504}\\\\n\\\"\\n"}
{"Time":"2022-06-24T02:43:12.406344812Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/daemonset","Test":"TestSimpleDaemonSetLaunchesPods/TestSimpleDaemonSetLaunchesPods_RollingUpdate","Output":"20, 0xc00952fea0)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:255 +0xd8\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP(0xc00d7b7068, {0x51e4af8, 0xc00d771b30}, 0xdf8475800)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:146 +0x2f6\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.withRequestDeadline.func1({0x51e4af8, 0xc00d771b30}, 0xc00e04ff00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/request_deadline.go:100 +0x494\\nnet/http.HandlerFunc.ServeHTTP(0x500878, {0x51e4af8, 0xc00d771b30}, 0xc009b192f0)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.withWaitGroup.func1({0x51e4af8, 0xc00d771b30}, 0xc00e04ff00)\\n\\t/home/prow/go/src/k8s.io/kuber"}
{"Time":"2022-06-24T02:43:12.406354772Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/daemonset","Test":"TestSimpleDaemonSetLaunchesPods/TestSimpleDaemonSetLaunchesPods_RollingUpdate","Output":"netes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/waitgroup.go:77 +0x766\\nnet/http.HandlerFunc.ServeHTTP(0x51e9d98, {0x51e4af8, 0xc00d771b30}, 0x50d5a30)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithWarningRecorder.func1({0x51e4af8, 0xc00d771b30}, 0xc00e04fe00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/warning.go:35 +0x2bb\\nnet/http.HandlerFunc.ServeHTTP(0x44ba160, {0x51e4af8, 0xc00d771b30}, 0xd)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithCacheControl.func1({0x51e4af8, 0xc00d771b30}, 0x50d5a01)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/cachecontrol.go:31 +0x126\\nnet/http.HandlerFunc.ServeHTTP(0x51eaa10, {0x51e4af8, 0xc00d771b30}, 0x50d5a30)\\n\\t/usr/local/go/src/"}
{"Time":"2022-06-24T02:43:12.406363061Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/daemonset","Test":"TestSimpleDaemonSetLaunchesPods/TestSimpleDaemonSetLaunchesPods_RollingUpdate","Output":"net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.withLogging.func1({0x51e62c8, 0xc008fcd500}, 0xc00e04fd00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:128 +0x44d\\nnet/http.HandlerFunc.ServeHTTP(0x51e9d98, {0x51e62c8, 0xc008fcd500}, 0x50d5a30)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithRequestInfo.func1({0x51e62c8, 0xc008fcd500}, 0xc00e04fc00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/requestinfo.go:39 +0x316\\nnet/http.HandlerFunc.ServeHTTP(0x51e9d98, {0x51e62c8, 0xc008fcd500}, 0xc00d771a70)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.withRequestReceivedTimestampWithClock.func1({0x51e62c8, 0xc008fcd500}, 0xc00e04fb00)\\n\\t/home/prow/go/src/k8s.io/kubernet"}
{"Time":"2022-06-24T02:43:12.406372216Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/daemonset","Test":"TestSimpleDaemonSetLaunchesPods/TestSimpleDaemonSetLaunchesPods_RollingUpdate","Output":"es/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/request_received_time.go:38 +0x27e\\nnet/http.HandlerFunc.ServeHTTP(0x51e9d98, {0x51e62c8, 0xc008fcd500}, 0x50d5a30)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithMuxAndDiscoveryComplete.func1({0x51e62c8, 0xc008fcd500}, 0xc00e04fa00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/mux_discovery_complete.go:52 +0x2c2\\nnet/http.HandlerFunc.ServeHTTP(0x160, {0x51e62c8, 0xc008fcd500}, 0x0)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.withPanicRecovery.func1({0x51e62c8, 0xc008fcd500}, 0xc00e06c6e0)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/wrap.go:74 +0xba\\nnet/http.HandlerFunc.ServeHTTP(0x44ba160, {0x51e62c8, 0xc008fcd500}, 0x8)\\n\\t/usr"}
{"Time":"2022-06-24T02:43:12.406385727Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/daemonset","Test":"TestSimpleDaemonSetLaunchesPods/TestSimpleDaemonSetLaunchesPods_RollingUpdate","Output":"/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.withAuditID.func1({0x51e62c8, 0xc008fcd500}, 0xc009837000)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/with_auditid.go:66 +0x40d\\nnet/http.HandlerFunc.ServeHTTP(0x200, {0x51e62c8, 0xc008fcd500}, 0xc009b19ac8)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*APIServerHandler).ServeHTTP(...)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:189\\nk8s.io/kubernetes/test/integration/framework.startAPIServerOrDie.func1({0x51e62c8, 0xc008fcd500}, 0x727562)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/integration/framework/controlplane_utils.go:144 +0x45\\nnet/http.HandlerFunc.ServeHTTP(0x0, {0x51e62c8, 0xc008fcd500}, 0x7ffb24adbf18)\\n\\t/usr/local/go/src/net/http/se"}
{"Time":"2022-06-24T02:43:12.406394277Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/daemonset","Test":"TestSimpleDaemonSetLaunchesPods/TestSimpleDaemonSetLaunchesPods_RollingUpdate","Output":"rver.go:2047 +0x2f\\nnet/http.serverHandler.ServeHTTP({0x51d37e8}, {0x51e62c8, 0xc008fcd500}, 0xc009837000)\\n\\t/usr/local/go/src/net/http/server.go:2879 +0x43b\\nnet/http.(*conn).serve(0xc005489a40, {0x51e9d98, 0xc00d8b2840})\\n\\t/usr/local/go/src/net/http/server.go:1930 +0xb08\\ncreated by net/http.(*Server).Serve\\n\\t/usr/local/go/src/net/http/server.go:3034 +0x4e8\\n\" addedInfo=\"\\nlogging error output: \\\"{\\\\\\\"metadata\\\\\\\":{},\\\\\\\"status\\\\\\\":\\\\\\\"Failure\\\\\\\",\\\\\\\"message\\\\\\\":\\\\\\\"Timeout: request did not complete within the allotted timeout\\\\\\\",\\\\\\\"reason\\\\\\\":\\\\\\\"Timeout\\\\\\\",\\\\\\\"details\\\\\\\":{},\\\\\\\"code\\\\\\\":504}\\\\n\\\"\\n\"\n"}
{"Time":"2022-06-24T02:48:07.378059466Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestDeploymentRollingUpdate","Output":" +0x25\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).timeout(0xc001be67b0, 0xc00c744be0)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:255 +0xd8\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP(0xc0045a51a0, {0x51099c8, 0xc001be66c0}, 0xdf8475800)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:146 +0x2f6\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.withRequestDeadline.func1({0x51099c8, 0xc001be66c0}, 0xc005c8c500)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/request_deadline.go:100 +0x494\\nnet/http.HandlerFunc.ServeHTTP(0x500198, {0x51099c8, 0xc001be66c0}, 0xc0029572f0)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/s"}
{"Time":"2022-06-24T02:48:07.37809053Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestDeploymentRollingUpdate","Output":"/filters/wrap.go:74 +0xba\\nnet/http.HandlerFunc.ServeHTTP(0x43fc000, {0x510b168, 0xc00b9b2460}, 0x8)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.withAuditID.func1({0x510b168, 0xc00b9b2460}, 0xc00c2c9e00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/with_auditid.go:66 +0x40d\\nnet/http.HandlerFunc.ServeHTTP(0xc005d4cb68, {0x510b168, 0xc00b9b2460}, 0xc005d4cbd8)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*APIServerHandler).ServeHTTP(...)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:189\\nk8s.io/kubernetes/test/integration/framework.startAPIServerOrDie.func1({0x510b168, 0xc00b9b2460}, 0x726e82)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/integration/framework/controlplane_utils.go:144 +0x"}
{"Time":"2022-06-24T02:48:07.378124438Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestDeploymentRollingUpdate","Output":"8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*baseTimeoutWriter).timeout(0xc0044d7950, 0xc0060f3360)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:255 +0xd8\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP(0xc0045a51a0, {0x51099c8, 0xc0044d7830}, 0xdf8475800)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:146 +0x2f6\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.withRequestDeadline.func1({0x51099c8, 0xc0044d7830}, 0xc0057e0100)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/request_deadline.go:100 +0x494\\nnet/http.HandlerFunc.ServeHTTP(0x500198, {0x51099c8, 0xc0044d7830}, 0xc00222b2f0)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/fil"}
{"Time":"2022-06-24T02:48:07.37815388Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestDeploymentRollingUpdate","Output":"wrap.go:74 +0xba\\nnet/http.HandlerFunc.ServeHTTP(0x43fc000, {0x510b168, 0xc00bc34c40}, 0x8)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.withAuditID.func1({0x510b168, 0xc00bc34c40}, 0xc00c2c9f00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/with_auditid.go:66 +0x40d\\nnet/http.HandlerFunc.ServeHTTP(0x0, {0x510b168, 0xc00bc34c40}, 0x40ef94)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*APIServerHandler).ServeHTTP(...)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:189\\nk8s.io/kubernetes/test/integration/framework.startAPIServerOrDie.func1({0x510b168, 0xc00bc34c40}, 0x10000c00222baf0)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/integration/framework/controlplane_utils.go:144 +0x45\\nnet/http."}
{"Time":"2022-06-24T02:48:35.371678505Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestDeploymentRollingUpdate","Output":"Writer).timeout(0xc003d73980, 0xc0060f3540)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:255 +0xd8\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP(0xc0045a51a0, {0x51099c8, 0xc0019d7bc0}, 0xdf8475800)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:146 +0x2f6\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.withRequestDeadline.func1({0x51099c8, 0xc0019d7bc0}, 0xc005df9f00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/request_deadline.go:100 +0x494\\nnet/http.HandlerFunc.ServeHTTP(0x7f7d422c83c8, {0x51099c8, 0xc0019d7bc0}, 0xc004e7c000)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.withWaitGroup.func1({0x51099c8, 0xc0019d7bc0}, 0xc005df9f00)\\n"}
{"Time":"2022-06-24T02:48:35.371685828Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestDeploymentRollingUpdate","Output":"\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/waitgroup.go:77 +0x766\\nnet/http.HandlerFunc.ServeHTTP(0x510ec58, {0x51099c8, 0xc0019d7bc0}, 0x4ffcc08)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithWarningRecorder.func1({0x51099c8, 0xc0019d7bc0}, 0xc006170e00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/warning.go:35 +0x2bb\\nnet/http.HandlerFunc.ServeHTTP(0x43fc000, {0x51099c8, 0xc0019d7bc0}, 0xd)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithCacheControl.func1({0x51099c8, 0xc0019d7bc0}, 0x4ffcc01)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/cachecontrol.go:31 +0x126\\nnet/http.HandlerFunc.ServeHTTP(0x510f7f0, {0x51099c8, 0xc0019d7bc0}, "}
{"Time":"2022-06-24T02:48:35.371693127Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestDeploymentRollingUpdate","Output":"0x4ffcc08)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.withLogging.func1({0x510b168, 0xc00bdb82a0}, 0xc006170d00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:128 +0x44d\\nnet/http.HandlerFunc.ServeHTTP(0x510ec58, {0x510b168, 0xc00bdb82a0}, 0x4ffcc08)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithRequestInfo.func1({0x510b168, 0xc00bdb82a0}, 0xc006170c00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/requestinfo.go:39 +0x316\\nnet/http.HandlerFunc.ServeHTTP(0x510ec58, {0x510b168, 0xc00bdb82a0}, 0xc0019d77d0)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.withRequestReceivedTimestampWithClock.func1({0x510b168, 0xc00bdb82a0}, 0xc006170b00)\\n\\t/"}
{"Time":"2022-06-24T02:48:35.371702401Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestDeploymentRollingUpdate","Output":"home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/request_received_time.go:38 +0x27e\\nnet/http.HandlerFunc.ServeHTTP(0x510ec58, {0x510b168, 0xc00bdb82a0}, 0x4ffcc08)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithMuxAndDiscoveryComplete.func1({0x510b168, 0xc00bdb82a0}, 0xc006170a00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/mux_discovery_complete.go:52 +0x2c2\\nnet/http.HandlerFunc.ServeHTTP(0x160, {0x510b168, 0xc00bdb82a0}, 0x0)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.withPanicRecovery.func1({0x510b168, 0xc00bdb82a0}, 0xc003aca160)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/wrap.go:74 +0xba\\nnet/http.HandlerFunc.ServeHTTP(0x43fc000, {0x510b"}
{"Time":"2022-06-24T02:48:35.371727035Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestDeploymentRollingUpdate","Output":"/usr/local/go/src/net/http/server.go:2047 +0x2f\\nnet/http.serverHandler.ServeHTTP({0xc0019d66f0}, {0x510b168, 0xc00bdb82a0}, 0xc006170800)\\n\\t/usr/local/go/src/net/http/server.go:2879 +0x43b\\nnet/http.(*conn).serve(0xc009eb8e60, {0x510ec58, 0xc005080030})\\n\\t/usr/local/go/src/net/http/server.go:1930 +0xb08\\ncreated by net/http.(*Server).Serve\\n\\t/usr/local/go/src/net/http/server.go:3034 +0x4e8\\n\" addedInfo=\"\\nlogging error output: \\\"{\\\\\\\"metadata\\\\\\\":{},\\\\\\\"status\\\\\\\":\\\\\\\"Failure\\\\\\\",\\\\\\\"message\\\\\\\":\\\\\\\"Timeout: request did not complete within the allotted timeout\\\\\\\",\\\\\\\"reason\\\\\\\":\\\\\\\"Timeout\\\\\\\",\\\\\\\"details\\\\\\\":{},\\\\\\\"code\\\\\\\":504}\\\\n\\\"\\n\"\n"}
{"Time":"2022-06-24T02:48:35.371787696Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestDeploymentRollingUpdate","Output":"/apiserver/pkg/server/filters.(*baseTimeoutWriter).timeout(0xc003d738c0, 0xc0060f3680)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:255 +0xd8\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP(0xc0045a51a0, {0x51099c8, 0xc004770930}, 0xdf8475800)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:146 +0x2f6\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.withRequestDeadline.func1({0x51099c8, 0xc004770930}, 0xc005752b00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/request_deadline.go:100 +0x494\\nnet/http.HandlerFunc.ServeHTTP(0x0, {0x51099c8, 0xc004770930}, 0x0)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.withWaitGroup.func1({0x51099c8, 0xc0047"}
{"Time":"2022-06-24T02:48:35.371803211Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestDeploymentRollingUpdate","Output":"70930}, 0xc005752b00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/waitgroup.go:77 +0x766\\nnet/http.HandlerFunc.ServeHTTP(0x510ec58, {0x51099c8, 0xc004770930}, 0x4ffcc08)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithWarningRecorder.func1({0x51099c8, 0xc004770930}, 0xc005df9e00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/warning.go:35 +0x2bb\\nnet/http.HandlerFunc.ServeHTTP(0x43fc000, {0x51099c8, 0xc004770930}, 0xd)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithCacheControl.func1({0x51099c8, 0xc004770930}, 0x4ffcc01)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/cachecontrol.go:31 +0x126\\nnet/http.HandlerFunc.ServeHTTP(0x510f7f0, {0x5"}
{"Time":"2022-06-24T02:48:35.371819161Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestDeploymentRollingUpdate","Output":"e0}, 0xc005df9900)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/request_received_time.go:38 +0x27e\\nnet/http.HandlerFunc.ServeHTTP(0x510ec58, {0x510b168, 0xc00cc827e0}, 0x4ffcc08)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithMuxAndDiscoveryComplete.func1({0x510b168, 0xc00cc827e0}, 0xc005df9800)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/mux_discovery_complete.go:52 +0x2c2\\nnet/http.HandlerFunc.ServeHTTP(0x160, {0x510b168, 0xc00cc827e0}, 0x0)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.withPanicRecovery.func1({0x510b168, 0xc00cc827e0}, 0xc0039e5e40)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/wrap.go:74 +0xba\\nnet/http.HandlerFunc.Serve"}
{"Time":"2022-06-24T02:48:35.371837798Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestDeploymentRollingUpdate","Output":"e0}, 0x0)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nnet/http.serverHandler.ServeHTTP({0x50f86e0}, {0x510b168, 0xc00cc827e0}, 0xc006170900)\\n\\t/usr/local/go/src/net/http/server.go:2879 +0x43b\\nnet/http.(*conn).serve(0xc009eb8f00, {0x510ec58, 0xc005080030})\\n\\t/usr/local/go/src/net/http/server.go:1930 +0xb08\\ncreated by net/http.(*Server).Serve\\n\\t/usr/local/go/src/net/http/server.go:3034 +0x4e8\\n\" addedInfo=\"\\nlogging error output: \\\"{\\\\\\\"metadata\\\\\\\":{},\\\\\\\"status\\\\\\\":\\\\\\\"Failure\\\\\\\",\\\\\\\"message\\\\\\\":\\\\\\\"Timeout: request did not complete within the allotted timeout\\\\\\\",\\\\\\\"reason\\\\\\\":\\\\\\\"Timeout\\\\\\\",\\\\\\\"details\\\\\\\":{},\\\\\\\"code\\\\\\\":504}\\\\n\\\"\\n\"\n"}
{"Time":"2022-06-24T02:48:39.751368291Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestScaleAllResources","Output":"erver/pkg/server/filters/timeout.go:255 +0xd8\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP(0xc0561aec90, {0x554cff8, 0xc0caabfdd0}, 0xdf8475800)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:146 +0x2f6\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.withRequestDeadline.func1({0x554cff8, 0xc0caabfdd0}, 0xc0caab9b00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/request_deadline.go:100 +0x494\\nnet/http.HandlerFunc.ServeHTTP(0x416266, {0x554cff8, 0xc0caabfdd0}, 0x7facd2254fff)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.withWaitGroup.func1({0x554cff8, 0xc0caabfdd0}, 0xc0caab9b00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/waitgroup.go:77 +0x766\\n"}
{"Time":"2022-06-24T02:48:39.751393624Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestScaleAllResources","Output":":38 +0x27e\\nnet/http.HandlerFunc.ServeHTTP(0x5552878, {0x554eb88, 0xc0d2c9b260}, 0x5436fd8)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithMuxAndDiscoveryComplete.func1({0x554eb88, 0xc0d2c9b260}, 0xc0caab9600)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/mux_discovery_complete.go:52 +0x2c2\\nnet/http.HandlerFunc.ServeHTTP(0x160, {0x554eb88, 0xc0d2c9b260}, 0x0)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.withPanicRecovery.func1({0x554eb88, 0xc0d2c9b260}, 0xc0d31678c0)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/wrap.go:74 +0xba\\nnet/http.HandlerFunc.ServeHTTP(0x4794d60, {0x554eb88, 0xc0d2c9b260}, 0x8)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.wit"}
{"Time":"2022-06-24T02:48:39.751419385Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestScaleAllResources","Output":"sr/local/go/src/net/http/server.go:2879 +0x43b\\nnet/http.(*conn).serve(0xc0af86e780, {0x5552878, 0xc0b02d8630})\\n\\t/usr/local/go/src/net/http/server.go:1930 +0xb08\\ncreated by net/http.(*Server).Serve\\n\\t/usr/local/go/src/net/http/server.go:3034 +0x4e8\\n\" addedInfo=\"\\nlogging error output: \\\"{\\\\\\\"metadata\\\\\\\":{},\\\\\\\"status\\\\\\\":\\\\\\\"Failure\\\\\\\",\\\\\\\"message\\\\\\\":\\\\\\\"Timeout: request did not complete within the allotted timeout\\\\\\\",\\\\\\\"reason\\\\\\\":\\\\\\\"Timeout\\\\\\\",\\\\\\\"details\\\\\\\":{},\\\\\\\"code\\\\\\\":504}\\\\n\\\"\\n\"\n"}
{"Time":"2022-06-24T02:48:47.143417569Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestScaleAllResources","Output":"ome/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:255 +0xd8\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP(0xc0561aec90, {0x554cff8, 0xc0d3602f30}, 0xdf8475800)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:146 +0x2f6\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.withRequestDeadline.func1({0x554cff8, 0xc0d3602f30}, 0xc0caa03500)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/request_deadline.go:100 +0x494\\nnet/http.HandlerFunc.ServeHTTP(0x500cd8, {0x554cff8, 0xc0d3602f30}, 0xc0d322b2f0)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.withWaitGroup.func1({0x554cff8, 0xc0d3602f30}, 0xc0caa03500)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/"}
{"Time":"2022-06-24T02:48:47.143425966Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestScaleAllResources","Output":"src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/waitgroup.go:77 +0x766\\nnet/http.HandlerFunc.ServeHTTP(0x5552878, {0x554cff8, 0xc0d3602f30}, 0x5436fd8)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithWarningRecorder.func1({0x554cff8, 0xc0d3602f30}, 0xc0caa03400)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/warning.go:35 +0x2bb\\nnet/http.HandlerFunc.ServeHTTP(0x4794d60, {0x554cff8, 0xc0d3602f30}, 0xd)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithCacheControl.func1({0x554cff8, 0xc0d3602f30}, 0x5436f01)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/cachecontrol.go:31 +0x126\\nnet/http.HandlerFunc.ServeHTTP(0x55538e0, {0x554cff8, 0xc0d3602f30}, 0x5436fd8)\\n\\t/usr/local/go/src/net/http/server.go:2047"}
{"Time":"2022-06-24T02:48:47.143433718Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestScaleAllResources","Output":" +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.withLogging.func1({0x554eb88, 0xc0d360a000}, 0xc0caa03300)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:128 +0x44d\\nnet/http.HandlerFunc.ServeHTTP(0x5552878, {0x554eb88, 0xc0d360a000}, 0x5436fd8)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithRequestInfo.func1({0x554eb88, 0xc0d360a000}, 0xc0caa03200)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/requestinfo.go:39 +0x316\\nnet/http.HandlerFunc.ServeHTTP(0x5552878, {0x554eb88, 0xc0d360a000}, 0xc0d3602e70)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.withRequestReceivedTimestampWithClock.func1({0x554eb88, 0xc0d360a000}, 0xc0caa03100)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src"}
{"Time":"2022-06-24T02:48:47.14344096Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestScaleAllResources","Output":"/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/request_received_time.go:38 +0x27e\\nnet/http.HandlerFunc.ServeHTTP(0x5552878, {0x554eb88, 0xc0d360a000}, 0x5436fd8)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithMuxAndDiscoveryComplete.func1({0x554eb88, 0xc0d360a000}, 0xc0caa03000)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/mux_discovery_complete.go:52 +0x2c2\\nnet/http.HandlerFunc.ServeHTTP(0x160, {0x554eb88, 0xc0d360a000}, 0x0)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.withPanicRecovery.func1({0x554eb88, 0xc0d360a000}, 0xc0d3608160)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/wrap.go:74 +0xba\\nnet/http.HandlerFunc.ServeHTTP(0x4794d60, {0x554eb88, 0xc0d360a000}, 0x8)\\n\\t/usr/local/go/src/net/http/"}
{"Time":"2022-06-24T02:48:47.143448203Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestScaleAllResources","Output":"server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.withAuditID.func1({0x554eb88, 0xc0d360a000}, 0xc0d348b900)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/with_auditid.go:66 +0x40d\\nnet/http.HandlerFunc.ServeHTTP(0xc0d0a76c48, {0x554eb88, 0xc0d360a000}, 0x0)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*APIServerHandler).ServeHTTP(...)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:189\\nk8s.io/kubernetes/test/integration/framework.startAPIServerOrDie.func1({0x554eb88, 0xc0d360a000}, 0x3be31e2)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/integration/framework/controlplane_utils.go:144 +0x45\\nnet/http.HandlerFunc.ServeHTTP(0x0, {0x554eb88, 0xc0d360a000}, 0x0)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nnet/http.server"}
{"Time":"2022-06-24T02:48:49.056348784Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestScaleAllResources","Output":"dor/k8s.io/apiserver/pkg/server/filters/timeout.go:255 +0xd8\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP(0xc0561aec90, {0x554cff8, 0xc0d2f679b0}, 0xdf8475800)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:146 +0x2f6\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.withRequestDeadline.func1({0x554cff8, 0xc0d2f679b0}, 0xc0d3714b00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/request_deadline.go:100 +0x494\\nnet/http.HandlerFunc.ServeHTTP(0x4e, {0x554cff8, 0xc0d2f679b0}, 0xc0d2a952f0)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.withWaitGroup.func1({0x554cff8, 0xc0d2f679b0}, 0xc0d3714b00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/waitgroup.go:77"}
{"Time":"2022-06-24T02:48:49.056356075Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestScaleAllResources","Output":" +0x766\\nnet/http.HandlerFunc.ServeHTTP(0x5552878, {0x554cff8, 0xc0d2f679b0}, 0x5436fd8)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithWarningRecorder.func1({0x554cff8, 0xc0d2f679b0}, 0xc0d3714a00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/warning.go:35 +0x2bb\\nnet/http.HandlerFunc.ServeHTTP(0x4794d60, {0x554cff8, 0xc0d2f679b0}, 0xd)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithCacheControl.func1({0x554cff8, 0xc0d2f679b0}, 0x5436f01)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/cachecontrol.go:31 +0x126\\nnet/http.HandlerFunc.ServeHTTP(0x55538e0, {0x554cff8, 0xc0d2f679b0}, 0x5436fd8)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.withLogging"}
{"Time":"2022-06-24T02:48:49.056370285Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestScaleAllResources","Output":"d_time.go:38 +0x27e\\nnet/http.HandlerFunc.ServeHTTP(0x5552878, {0x554eb88, 0xc0d27389a0}, 0x5436fd8)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithMuxAndDiscoveryComplete.func1({0x554eb88, 0xc0d27389a0}, 0xc0d3714600)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/mux_discovery_complete.go:52 +0x2c2\\nnet/http.HandlerFunc.ServeHTTP(0x160, {0x554eb88, 0xc0d27389a0}, 0x0)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.withPanicRecovery.func1({0x554eb88, 0xc0d27389a0}, 0xc0d2f3fce0)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/wrap.go:74 +0xba\\nnet/http.HandlerFunc.ServeHTTP(0x4794d60, {0x554eb88, 0xc0d27389a0}, 0x8)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/fi"}
{"Time":"2022-06-24T02:48:49.056389936Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver/apply","Test":"TestScaleAllResources","Output":"00)\\n\\t/usr/local/go/src/net/http/server.go:2879 +0x43b\\nnet/http.(*conn).serve(0xc0d3138500, {0x5552878, 0xc0b02d8630})\\n\\t/usr/local/go/src/net/http/server.go:1930 +0xb08\\ncreated by net/http.(*Server).Serve\\n\\t/usr/local/go/src/net/http/server.go:3034 +0x4e8\\n\" addedInfo=\"\\nlogging error output: \\\"{\\\\\\\"metadata\\\\\\\":{},\\\\\\\"status\\\\\\\":\\\\\\\"Failure\\\\\\\",\\\\\\\"message\\\\\\\":\\\\\\\"Timeout: request did not complete within the allotted timeout\\\\\\\",\\\\\\\"reason\\\\\\\":\\\\\\\"Timeout\\\\\\\",\\\\\\\"details\\\\\\\":{},\\\\\\\"code\\\\\\\":504}\\\\n\\\"\\n\"\n"}
{"Time":"2022-06-24T02:49:10.727057276Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/apiserver","Test":"TestWatchCacheUpdatedByEtcd","Output":"WARNING: 2022/06/24 02:49:10 [core] grpc: Server.processUnaryRPC failed to write status connection error: desc = \"transport is closing\"\n"}
{"Time":"2022-06-24T02:49:44.51287595Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestDeploymentRollingUpdate","Output":"c00697a1e0, 0xc00483ef00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:255 +0xd8\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP(0xc0045a51a0, {0x51099c8, 0xc006919290}, 0xdf8475800)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:146 +0x2f6\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.withRequestDeadline.func1({0x51099c8, 0xc006919290}, 0xc005c62400)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/request_deadline.go:100 +0x494\\nnet/http.HandlerFunc.ServeHTTP(0xc005c62400, {0x51099c8, 0xc006919290}, 0x3)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.withWaitGroup.func1({0x51099c8, 0xc006919290}, 0xc005c62400)\\n\\t/home/prow/go/src/k8s.io/ku"}
{"Time":"2022-06-24T02:49:44.512884227Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestDeploymentRollingUpdate","Output":"bernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/waitgroup.go:77 +0x766\\nnet/http.HandlerFunc.ServeHTTP(0x510ec58, {0x51099c8, 0xc006919290}, 0x4ffcc08)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithWarningRecorder.func1({0x51099c8, 0xc006919290}, 0xc0059bd000)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/warning.go:35 +0x2bb\\nnet/http.HandlerFunc.ServeHTTP(0x43fc000, {0x51099c8, 0xc006919290}, 0xd)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithCacheControl.func1({0x51099c8, 0xc006919290}, 0x4ffcc01)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/cachecontrol.go:31 +0x126\\nnet/http.HandlerFunc.ServeHTTP(0x510f7f0, {0x51099c8, 0xc006919290}, 0x4ffcc08)\\n\\t/usr/local/go/s"}
{"Time":"2022-06-24T02:49:44.512892025Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestDeploymentRollingUpdate","Output":"rc/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.withLogging.func1({0x510b168, 0xc00b9b20e0}, 0xc005abcd00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:128 +0x44d\\nnet/http.HandlerFunc.ServeHTTP(0x510ec58, {0x510b168, 0xc00b9b20e0}, 0x4ffcc08)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithRequestInfo.func1({0x510b168, 0xc00b9b20e0}, 0xc0057e0e00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/requestinfo.go:39 +0x316\\nnet/http.HandlerFunc.ServeHTTP(0x510ec58, {0x510b168, 0xc00b9b20e0}, 0xc006588d50)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.withRequestReceivedTimestampWithClock.func1({0x510b168, 0xc00b9b20e0}, 0xc0057e0d00)\\n\\t/home/prow/go/src/k8s.io/kuber"}
{"Time":"2022-06-24T02:49:44.512900469Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestDeploymentRollingUpdate","Output":"netes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/request_received_time.go:38 +0x27e\\nnet/http.HandlerFunc.ServeHTTP(0x510ec58, {0x510b168, 0xc00b9b20e0}, 0x4ffcc08)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithMuxAndDiscoveryComplete.func1({0x510b168, 0xc00b9b20e0}, 0xc0059bc800)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/mux_discovery_complete.go:52 +0x2c2\\nnet/http.HandlerFunc.ServeHTTP(0x0, {0x510b168, 0xc00b9b20e0}, 0x1729586)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.withPanicRecovery.func1({0x510b168, 0xc00b9b20e0}, 0x206)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/wrap.go:74 +0xba\\nnet/http.HandlerFunc.ServeHTTP(0x43fc000, {0x510b168, 0xc00b9b20e0}, 0x8)\\n\\t/usr"}
{"Time":"2022-06-24T02:49:44.512908425Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestDeploymentRollingUpdate","Output":"/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.withAuditID.func1({0x510b168, 0xc00b9b20e0}, 0xc005c8cb00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/with_auditid.go:66 +0x40d\\nnet/http.HandlerFunc.ServeHTTP(0x0, {0x510b168, 0xc00b9b20e0}, 0x46acc1)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*APIServerHandler).ServeHTTP(...)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:189\\nk8s.io/kubernetes/test/integration/framework.startAPIServerOrDie.func1({0x510b168, 0xc00b9b20e0}, 0x246)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/integration/framework/controlplane_utils.go:144 +0x45\\nnet/http.HandlerFunc.ServeHTTP(0x0, {0x510b168, 0xc00b9b20e0}, 0x72656c6c6f72746e)\\n\\t/usr/local/go/src/net/http/server."}
{"Time":"2022-06-24T02:49:44.512926852Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestDeploymentRollingUpdate","Output":"go:2047 +0x2f\\nnet/http.serverHandler.ServeHTTP({0x50f86e0}, {0x510b168, 0xc00b9b20e0}, 0xc005c8cb00)\\n\\t/usr/local/go/src/net/http/server.go:2879 +0x43b\\nnet/http.(*conn).serve(0xc0060f3720, {0x510ec58, 0xc005080030})\\n\\t/usr/local/go/src/net/http/server.go:1930 +0xb08\\ncreated by net/http.(*Server).Serve\\n\\t/usr/local/go/src/net/http/server.go:3034 +0x4e8\\n\" addedInfo=\"\\nlogging error output: \\\"{\\\\\\\"metadata\\\\\\\":{},\\\\\\\"status\\\\\\\":\\\\\\\"Failure\\\\\\\",\\\\\\\"message\\\\\\\":\\\\\\\"Timeout: request did not complete within the allotted timeout\\\\\\\",\\\\\\\"reason\\\\\\\":\\\\\\\"Timeout\\\\\\\",\\\\\\\"details\\\\\\\":{},\\\\\\\"code\\\\\\\":504}\\\\n\\\"\\n\"\n"}
{"Time":"2022-06-24T02:49:50.378549118Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestDeploymentRollingUpdate","Output":"b40, 0xc001ce6b40)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:255 +0xd8\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP(0xc0045a51a0, {0x51099c8, 0xc006ed6ed0}, 0xdf8475800)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:146 +0x2f6\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.withRequestDeadline.func1({0x51099c8, 0xc006ed6ed0}, 0xc0059bd700)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/request_deadline.go:100 +0x494\\nnet/http.HandlerFunc.ServeHTTP(0xc000ec8c5a, {0x51099c8, 0xc006ed6ed0}, 0x0)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.withWaitGroup.func1({0x51099c8, 0xc006ed6ed0}, 0xc0059bd700)\\n\\t/home/prow/go/src/k8s.io/kubernete"}
{"Time":"2022-06-24T02:49:50.378559172Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestDeploymentRollingUpdate","Output":"s/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/waitgroup.go:77 +0x766\\nnet/http.HandlerFunc.ServeHTTP(0x510ec58, {0x51099c8, 0xc006ed6ed0}, 0x4ffcc08)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithWarningRecorder.func1({0x51099c8, 0xc006ed6ed0}, 0xc005cb8700)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/warning.go:35 +0x2bb\\nnet/http.HandlerFunc.ServeHTTP(0xc0045a51d0, {0x51099c8, 0xc006ed6ed0}, 0xc005cb8700)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithCacheControl.func1({0x51099c8, 0xc006ed6ed0}, 0xc0046d36a0)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/cachecontrol.go:31 +0x126\\nnet/http.HandlerFunc.ServeHTTP(0x510f7f0, {0x51099c8, 0xc006ed6ed0}, 0x4ffcc08)\\n\\t/usr/lo"}
{"Time":"2022-06-24T02:49:50.378567185Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestDeploymentRollingUpdate","Output":"cal/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.withLogging.func1({0x510b168, 0xc009fb5180}, 0xc0059bd500)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:128 +0x44d\\nnet/http.HandlerFunc.ServeHTTP(0x510ec58, {0x510b168, 0xc009fb5180}, 0x4ffcc08)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithRequestInfo.func1({0x510b168, 0xc009fb5180}, 0xc0059bd400)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/requestinfo.go:39 +0x316\\nnet/http.HandlerFunc.ServeHTTP(0x510ec58, {0x510b168, 0xc009fb5180}, 0xc00697a960)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.withRequestReceivedTimestampWithClock.func1({0x510b168, 0xc009fb5180}, 0xc0059bd300)\\n\\t/home/prow/go/src/k8s."}
{"Time":"2022-06-24T02:49:50.378576813Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestDeploymentRollingUpdate","Output":"io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/request_received_time.go:38 +0x27e\\nnet/http.HandlerFunc.ServeHTTP(0x510ec58, {0x510b168, 0xc009fb5180}, 0x4ffcc08)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.WithMuxAndDiscoveryComplete.func1({0x510b168, 0xc009fb5180}, 0xc005abd300)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/mux_discovery_complete.go:52 +0x2c2\\nnet/http.HandlerFunc.ServeHTTP(0x160, {0x510b168, 0xc009fb5180}, 0x0)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.withPanicRecovery.func1({0x510b168, 0xc009fb5180}, 0xc006766b00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/wrap.go:74 +0xba\\nnet/http.HandlerFunc.ServeHTTP(0x43fc000, {0x510b168, 0xc009fb5180}, 0"}
{"Time":"2022-06-24T02:49:50.378584049Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestDeploymentRollingUpdate","Output":"x8)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters.withAuditID.func1({0x510b168, 0xc009fb5180}, 0xc005cb8200)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filters/with_auditid.go:66 +0x40d\\nnet/http.HandlerFunc.ServeHTTP(0x0, {0x510b168, 0xc009fb5180}, 0xc0046d3ab0)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.(*APIServerHandler).ServeHTTP(...)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:189\\nk8s.io/kubernetes/test/integration/framework.startAPIServerOrDie.func1({0x510b168, 0xc009fb5180}, 0xc0046d3b70)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/integration/framework/controlplane_utils.go:144 +0x45\\nnet/http.HandlerFunc.ServeHTTP(0x0, {0x510b168, 0xc009fb5180}, 0x468aee)\\n\\t/usr/local/go/src/net/"}
{"Time":"2022-06-24T02:49:50.378599267Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/deployment","Test":"TestDeploymentRollingUpdate","Output":"http/server.go:2047 +0x2f\\nnet/http.serverHandler.ServeHTTP({0xc006588750}, {0x510b168, 0xc009fb5180}, 0xc005cb8200)\\n\\t/usr/local/go/src/net/http/server.go:2879 +0x43b\\nnet/http.(*conn).serve(0xc009eb80a0, {0x510ec58, 0xc005080030})\\n\\t/usr/local/go/src/net/http/server.go:1930 +0xb08\\ncreated by net/http.(*Server).Serve\\n\\t/usr/local/go/src/net/http/server.go:3034 +0x4e8\\n\" addedInfo=\"\\nlogging error output: \\\"{\\\\\\\"metadata\\\\\\\":{},\\\\\\\"status\\\\\\\":\\\\\\\"Failure\\\\\\\",\\\\\\\"message\\\\\\\":\\\\\\\"Timeout: request did not complete within the allotted timeout\\\\\\\",\\\\\\\"reason\\\\\\\":\\\\\\\"Timeout\\\\\\\",\\\\\\\"details\\\\\\\":{},\\\\\\\"code\\\\\\\":504}\\\\n\\\"\\n\"\n"}
{"Time":"2022-06-24T02:50:31.867358526Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/endpoints","Output":"ok  \tk8s.io/kubernetes/test/integration/endpoints\t12.521s\n"}
{"Time":"2022-06-24T02:50:40.396936987Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/dryrun","Output":"ok  \tk8s.io/kubernetes/test/integration/dryrun\t23.761s\n"}
{"Time":"2022-06-24T02:50:52.619640479Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/events","Output":"ok  \tk8s.io/kubernetes/test/integration/events\t5.537s\n"}
{"Time":"2022-06-24T02:51:14.120145015Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/endpointslice","Output":"ok  \tk8s.io/kubernetes/test/integration/endpointslice\t51.411s\n"}
{"Time":"2022-06-24T02:51:22.06829842Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/evictions","Output":"ok  \tk8s.io/kubernetes/test/integration/evictions\t22.958s\n"}
{"Time":"2022-06-24T02:51:31.201870766Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/garbagecollector","Test":"TestClusterScopedOwners","Output":"bs batch/v1, Resource=jobs certificates.k8s.io/v1, Resource=certificatesigningrequests coordination.k8s.io/v1, Resource=leases discovery.k8s.io/v1, Resource=endpointslices events.k8s.io/v1, Resource=events flowcontrol.apiserver.k8s.io/v1beta2, Resource=flowschemas flowcontrol.apiserver.k8s.io/v1beta2, Resource=prioritylevelconfigurations internal.apiserver.k8s.io/v1alpha1, Resource=storageversions networking.k8s.io/v1, Resource=ingressclasses networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies node.k8s.io/v1, Resource=runtimeclasses policy/v1, Resource=poddisruptionbudgets policy/v1beta1, Resource=podsecuritypolicies rbac.authorization.k8s.io/v1, Resource=clusterrolebindings rbac.authorization.k8s.io/v1, Resource=clusterroles rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles scheduling.k8s.io/v1, Resource=priorityclasses storage.k8s.io/v1, Resource=csidrivers storage.k8s.io/v1, Resource=csinodes storage.k8s.io/v1, Resource=stor"}
... skipping 131 lines ...
{"Time":"2022-06-24T02:52:32.928888142Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/pods","Test":"TestPodReadOnlyFilesystem","Output":"/pkg/runtime/serializer/versioning/versioning.go:191 +0x106\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.SerializeObject({0x48c80af, 0x10}, {0x7f5b1849e878, 0xc0019bf900}, {0x50b9648, 0xc004928480}, 0xc008ff5a00, 0x1f8, {0x50a46a0, 0xc0019bf860})\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:106 +0x4a9\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.WriteObjectNegotiated({0x50bbe38, 0xc0063a4880}, {0x50bc018, 0x7a872a8}, {{0x0, 0xc00630a9b0}, {0x48a9f44, 0x4fc6c80}}, {0x50b9648, 0xc004928480}, ...)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:274 +0x505\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.ErrorNegotiated({0x4fc6c80, 0xc0019bf7c0}, {0x50bbe38, 0xc0063a4880}, {{0x0, 0x0}, {0x48a9f44, 0xc"}
{"Time":"2022-06-24T02:52:32.928904095Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/pods","Test":"TestPodReadOnlyFilesystem","Output":"ics.InstrumentRouteFunc.func1(0xc0049283c0, 0xc004997ea0)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:509 +0x223\\nk8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).dispatch(0xc006393680, {0x50b9648, 0xc004f2dfb0}, 0xc008ff5a00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:288 +0x9d1\\nk8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).Dispatch(...)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:199\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.director.ServeHTTP({{0x48c19b5, 0x4fad410}, 0xc006393680, 0xc003ef3d50}, {0x50b9648, 0xc004f2dfb0}, 0xc008ff5a00)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/handler.go:146 +0x5bb\\nk8s.io/kubernetes/vendor/"}
{"Time":"2022-06-24T02:52:32.928922424Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/pods","Test":"TestPodReadOnlyFilesystem","Output":"ndor/k8s.io/apiserver/pkg/endpoints/filters/authorization.go:64 +0x425\\nnet/http.HandlerFunc.ServeHTTP(0x7a872a8, {0x50b9648, 0xc004f2dfb0}, 0xa)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x50b9648, 0xc004f2dfb0}, 0xc008ff5900)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:80 +0x178\\nnet/http.HandlerFunc.ServeHTTP(0xc004928090, {0x50b9648, 0xc004f2dfb0}, 0x79f12a)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x50b9648, 0xc004f2dfb0}, 0xc008ff5900)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:104 +0x1a5\\nnet/http.HandlerFunc.ServeHTTP(0xc0041bc440, {0x50b9648, 0xc004f2dfb0}, 0xc0020d6fc0)\\n\\t/usr/local/go/src/net/ht"}
{"Time":"2022-06-24T02:52:32.928930928Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/pods","Test":"TestPodReadOnlyFilesystem","Output":"tp/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithPriorityAndFairness.func1.9()\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/priority-and-fairness.go:277 +0x118\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol.(*configController).Handle.func2()\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_filter.go:191 +0x1eb\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol.immediateRequest.Finish(...)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_controller.go:793\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol.(*configController).Handle(0xc0051def00, {0x50be7f8, 0xc004928150}, {0xc0029ab600, {0x50bf208, 0xc0060fdc00}}, 0x1, 0x50a0fc8, 0xc0022698c0, 0xc00223d180)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_out"}
{"Time":"2022-06-24T02:52:32.928950758Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/pods","Test":"TestPodReadOnlyFilesystem","Output":"put/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/util/flowcontrol/apf_filter.go:181 +0x853\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.WithPriorityAndFairness.func1({0x50b9648, 0xc004f2dfb0}, 0xc008ff5900)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/priority-and-fairness.go:280 +0xdda\\nnet/http.HandlerFunc.ServeHTTP(0x7a872a8, {0x50b9648, 0xc004f2dfb0}, 0xa)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x50b9648, 0xc004f2dfb0}, 0xc008ff5900)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:80 +0x178\\nnet/http.HandlerFunc.ServeHTTP(0xc004928090, {0x50b9648, 0xc004f2dfb0}, 0x79f12a)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompl"}
{"Time":"2022-06-24T02:52:32.928965048Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/pods","Test":"TestPodReadOnlyFilesystem","Output":"o:80 +0x178\\nnet/http.HandlerFunc.ServeHTTP(0xc004928090, {0x50b9648, 0xc004f2dfb0}, 0x79f12a)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackCompleted.func1({0x50b9648, 0xc004f2dfb0}, 0xc008ff5900)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:104 +0x1a5\\nnet/http.HandlerFunc.ServeHTTP(0x7a872a8, {0x50b9648, 0xc004f2dfb0}, 0xa)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency.trackStarted.func1({0x50b9648, 0xc004f2dfb0}, 0xc008ff5900)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/filterlatency/filterlatency.go:80 +0x178\\nnet/http.HandlerFunc.ServeHTTP(0xc004928090, {0x50b9648, 0xc004f2dfb0}, 0x79f12a)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiser"}
{"Time":"2022-06-24T02:52:32.928983514Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/pods","Test":"TestPodReadOnlyFilesystem","Output":"iserver/pkg/endpoints/filterlatency/filterlatency.go:89 +0x46b\\nnet/http.HandlerFunc.ServeHTTP(0x100000000000001, {0x50b9648, 0xc004f2dfb0}, 0xc001c81140)\\n\\t/usr/local/go/src/net/http/server.go:2047 +0x2f\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP.func1()\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:114 +0x70\\ncreated by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters.(*timeoutHandler).ServeHTTP\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:100 +0x1dd\\n\" addedInfo=\"\\nlogging error output: \\\"{\\\\\\\"kind\\\\\\\":\\\\\\\"Status\\\\\\\",\\\\\\\"apiVersion\\\\\\\":\\\\\\\"v1\\\\\\\",\\\\\\\"metadata\\\\\\\":{},\\\\\\\"status\\\\\\\":\\\\\\\"Failure\\\\\\\",\\\\\\\"message\\\\\\\":\\\\\\\"Timeout: request did not complete within requested timeout - context canceled\\\\\\\",\\\\\\\"reason\\\\\\\":\\\\\\\"Timeout\\\\\\\",\\\\\\\"details\\\\\\\":{},\\\\\\\"code\\\\\\\":504}\\\\n\\\"\\n\"\n"}
{"Time":"2022-06-24T02:52:36.598164676Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/garbagecollector","Test":"TestSolidOwnerDoesNotBlockWaitingOwner","Output":"bs batch/v1, Resource=jobs certificates.k8s.io/v1, Resource=certificatesigningrequests coordination.k8s.io/v1, Resource=leases discovery.k8s.io/v1, Resource=endpointslices events.k8s.io/v1, Resource=events flowcontrol.apiserver.k8s.io/v1beta2, Resource=flowschemas flowcontrol.apiserver.k8s.io/v1beta2, Resource=prioritylevelconfigurations internal.apiserver.k8s.io/v1alpha1, Resource=storageversions networking.k8s.io/v1, Resource=ingressclasses networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies node.k8s.io/v1, Resource=runtimeclasses policy/v1, Resource=poddisruptionbudgets policy/v1beta1, Resource=podsecuritypolicies rbac.authorization.k8s.io/v1, Resource=clusterrolebindings rbac.authorization.k8s.io/v1, Resource=clusterroles rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles scheduling.k8s.io/v1, Resource=priorityclasses storage.k8s.io/v1, Resource=csidrivers storage.k8s.io/v1, Resource=csinodes storage.k8s.io/v1, Resource=stor"}
{"Time":"2022-06-24T02:52:36.598173119Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/garbagecollector","Test":"TestSolidOwnerDoesNotBlockWaitingOwner","Output":"ageclasses storage.k8s.io/v1, Resource=volumeattachments storage.k8s.io/v1beta1, Resource=csistoragecapacities], removed: []\n"}
{"Time":"2022-06-24T02:52:42.010998419Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/garbagecollector","Test":"TestNonBlockingOwnerRefDoesNotBlock","Output":"bs batch/v1, Resource=jobs certificates.k8s.io/v1, Resource=certificatesigningrequests coordination.k8s.io/v1, Resource=leases discovery.k8s.io/v1, Resource=endpointslices events.k8s.io/v1, Resource=events flowcontrol.apiserver.k8s.io/v1beta2, Resource=flowschemas flowcontrol.apiserver.k8s.io/v1beta2, Resource=prioritylevelconfigurations internal.apiserver.k8s.io/v1alpha1, Resource=storageversions networking.k8s.io/v1, Resource=ingressclasses networking.k8s.io/v1, Resource=ingresses networking.k8s.io/v1, Resource=networkpolicies node.k8s.io/v1, Resource=runtimeclasses policy/v1, Resource=poddisruptionbudgets policy/v1beta1, Resource=podsecuritypolicies rbac.authorization.k8s.io/v1, Resource=clusterrolebindings rbac.authorization.k8s.io/v1, Resource=clusterroles rbac.authorization.k8s.io/v1, Resource=rolebindings rbac.authorization.k8s.io/v1, Resource=roles scheduling.k8s.io/v1, Resource=priorityclasses storage.k8s.io/v1, Resource=csidrivers storage.k8s.io/v1, Resource=csinodes storage.k8s.io/v1, Resource=stor"}
{"Time":"2022-06-24T02:52:42.011005834Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/garbagecollector","Test":"TestNonBlockingOwnerRefDoesNotBlock","Output":"ageclasses storage.k8s.io/v1, Resource=volumeattachments storage.k8s.io/v1beta1, Resource=csistoragecapacities], removed: []\n"}
{"Time":"2022-06-24T02:52:46.696364181Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/quota","Test":"TestQuota","Output":"s/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/filters/timeout.go:236 +0xf1\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics.(*ResponseWriterDelegator).WriteHeader(0x3cb78a0, 0xc006d7c9a0)\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/metrics/metrics.go:652 +0x29\\nk8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters.(*deferredResponseWriter).Write(0xc006a6c960, {0xc00512a000, 0xfb, 0xaac})\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/endpoints/handlers/responsewriters/writers.go:229 +0x5bb\\nencoding/json.(*Encoder).Encode(0xc005638c30, {0x4843380, 0xc0061cf040})\\n\\t/usr/local/go/src/encoding/json/stream.go:231 +0x1f6\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json.(*Serializer).doEncode(0x48f0dc6, {0x50f2d68, 0xc0061cf040}, {0x5014c00, 0xc006a6c960})\\n\\t/home/prow/go/src/k8s.io/kuber"}
{"Time":"2022-06-24T02:52:46.696373224Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/quota","Test":"TestQuota","Output":"netes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json/json.go:233 +0x19a\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json.(*Serializer).Encode(0xc0000d10e0, {0x50f2d68, 0xc0061cf040}, {0x5014c00, 0xc006a6c960})\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/json/json.go:207 +0xfc\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning.(*codec).doEncode(0xc0061cf0e0, {0x50f2d68, 0xc0061cf040}, {0x5014c00, 0xc006a6c960})\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning/versioning.go:235 +0xb62\\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/runtime/serializer/versioning.(*codec).Encode(0xc0061cf0e0, {0x50f2d68, 0xc0061cf040}, {0x5014c00, 0xc006a6c960})\\n\\t/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apima"}
... skipping 265 lines ...
{"Time":"2022-06-24T02:54:59.816656036Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/statefulset","Test":"TestStatefulSetAvailable","Output":":map[] OwnerReferences:[{APIVersion:apps/v1 Kind:StatefulSet Name:sts UID:732ef000-5273-4b68-91cb-8b99cc06793c Controller:0xc0076b286a BlockOwnerDeletion:0xc0076b286b}] Finalizers:[] ClusterName: ManagedFields:[]}.\n"}
{"Time":"2022-06-24T02:54:59.830295675Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/statefulset","Test":"TestStatefulSetAvailable","Output":":map[] OwnerReferences:[{APIVersion:apps/v1 Kind:StatefulSet Name:sts UID:732ef000-5273-4b68-91cb-8b99cc06793c Controller:0xc0081982ca BlockOwnerDeletion:0xc0081982cb}] Finalizers:[] ClusterName: ManagedFields:[]}.\n"}
{"Time":"2022-06-24T02:54:59.845571245Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/statefulset","Test":"TestStatefulSetAvailable","Output":":map[] OwnerReferences:[{APIVersion:apps/v1 Kind:StatefulSet Name:sts UID:732ef000-5273-4b68-91cb-8b99cc06793c Controller:0xc0081d644a BlockOwnerDeletion:0xc0081d644b}] Finalizers:[] ClusterName: ManagedFields:[]}.\n"}
{"Time":"2022-06-24T02:54:59.857791245Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/statefulset","Test":"TestStatefulSetAvailable","Output":":map[] OwnerReferences:[{APIVersion:apps/v1 Kind:StatefulSet Name:sts UID:732ef000-5273-4b68-91cb-8b99cc06793c Controller:0xc008080faa BlockOwnerDeletion:0xc008080fab}] Finalizers:[] ClusterName: ManagedFields:[]}.\n"}
{"Time":"2022-06-24T02:55:00.098462023Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/statefulset","Output":"ok  \tk8s.io/kubernetes/test/integration/statefulset\t33.698s\n"}
{"Time":"2022-06-24T02:55:01.758899974Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/storageversion","Output":"ok  \tk8s.io/kubernetes/test/integration/storageversion\t33.245s\n"}
{"Time":"202{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:165","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","severity":"error","time":"2022-06-24T04:19:07Z"}
++ early_exit_handler
++ '[' -n 171 ']'
++ kill -TERM 171
++ cleanup_dind
++ [[ true == \t\r\u\e ]]
++ echo 'Cleaning up after docker'
... skipping 9 lines ...
================================================================================
Cleaning up after docker
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
================================================================================
Done cleaning up after docker in docker.
{"component":"entrypoint","file":"k8s.io/test-infra/prow/entrypoint/run.go:255","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15m0s grace period","severity":"error","time":"2022-06-24T04:34:07Z"}
{"component":"entrypoint","error":"os: process already finished","file":"k8s.io/test-infra/prow/entrypoint/run.go:257","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Could not kill process after grace period","severity":"error","time":"2022-06-24T04:34:07Z"}