PR:          tanjunchen: remove prometheus dependencies from k/k
Result:      FAILURE
Tests:       0 failed / 128 succeeded
Started:     2020-03-26 05:11
Elapsed:     21m56s
Revision:    72410aa8b67149e9a50df9fbc2683d62a2738316
Refs:        89285
Resultstore: https://source.cloud.google.com/results/invocations/b861f662-ce94-431f-b773-f3b8ff2e2173/targets/test
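For reference, a minimal sketch of reproducing this run locally, assuming a kubernetes/kubernetes checkout under GOPATH; the make target is inferred from hack/make-rules/test-cmd.sh in the log below, and the pr-89285 branch name is illustrative.

  # Fetch the PR head for Refs 89285 (the pr-89285 branch name is illustrative).
  cd "$(go env GOPATH)/src/k8s.io/kubernetes"
  git fetch origin pull/89285/head:pr-89285
  git checkout pr-89285
  # Run the same kubectl integration suite this job executes.
  make test-cmd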

No Test Failures!

Error lines from build-log.txt

... skipping 50 lines ...
Recording: record_command_canary
Running command: record_command_canary

+++ Running case: test-cmd.record_command_canary 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: record_command_canary
/home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 155: bogus-expected-to-fail: command not found
!!! [0326 05:15:42] Call tree:
!!! [0326 05:15:42]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 record_command_canary(...)
!!! [0326 05:15:42]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0326 05:15:42]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:131 juLog(...)
!!! [0326 05:15:42]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:159 record_command(...)
!!! [0326 05:15:42]  5: hack/make-rules/test-cmd.sh:35 source(...)
+++ exit code: 1
+++ error: 1
+++ [0326 05:15:42] Running kubeadm tests
warning: ignoring symlink /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes
go: warning: "k8s.io/kubernetes/vendor/github.com/go-bindata/go-bindata/..." matched no packages
+++ [0326 05:15:47] Building go targets for linux/amd64:
    cmd/kubeadm
warning: ignoring symlink /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes
... skipping 318 lines ...
go: warning: "k8s.io/kubernetes/vendor/github.com/go-bindata/go-bindata/..." matched no packages
+++ [0326 05:19:39] Building go targets for linux/amd64:
    cmd/kube-controller-manager
+++ [0326 05:20:07] Starting controller-manager
Flag --port has been deprecated, see --secure-port instead.
I0326 05:20:08.167996   55871 serving.go:329] Generated self-signed cert in-memory
W0326 05:20:09.187271   55871 authentication.go:409] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0326 05:20:09.187315   55871 authentication.go:268] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0326 05:20:09.187323   55871 authentication.go:292] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0326 05:20:09.187337   55871 authorization.go:177] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0326 05:20:09.187363   55871 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0326 05:20:09.187382   55871 controllermanager.go:161] Version: v1.19.0-alpha.1.5+60599ce55397cc
I0326 05:20:09.188780   55871 secure_serving.go:178] Serving securely on [::]:10257
I0326 05:20:09.188899   55871 tlsconfig.go:240] Starting DynamicServingCertificateController
I0326 05:20:09.190549   55871 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
I0326 05:20:09.190840   55871 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...
... skipping 111 lines ...
W0326 05:20:09.726135   55871 controllermanager.go:512] "tokencleaner" is disabled
I0326 05:20:09.726145   55871 shared_informer.go:225] Waiting for caches to sync for GC
I0326 05:20:09.726466   55871 controllermanager.go:533] Started "pvc-protection"
I0326 05:20:09.726607   55871 pvc_protection_controller.go:101] Starting PVC protection controller
I0326 05:20:09.726629   55871 shared_informer.go:225] Waiting for caches to sync for PVC protection
I0326 05:20:09.726734   55871 node_lifecycle_controller.go:78] Sending events to api server
E0326 05:20:09.726765   55871 core.go:229] failed to start cloud node lifecycle controller: no cloud provider provided
W0326 05:20:09.726775   55871 controllermanager.go:525] Skipping "cloud-node-lifecycle"
I0326 05:20:09.727072   55871 controllermanager.go:533] Started "pv-protection"
W0326 05:20:09.727090   55871 controllermanager.go:525] Skipping "root-ca-cert-publisher"
I0326 05:20:09.727217   55871 pv_protection_controller.go:83] Starting PV protection controller
I0326 05:20:09.727244   55871 shared_informer.go:225] Waiting for caches to sync for PV protection
I0326 05:20:09.727375   55871 controllermanager.go:533] Started "serviceaccount"
... skipping 61 lines ...
I0326 05:20:10.237768   55871 controllermanager.go:533] Started "ttl"
I0326 05:20:10.237901   55871 ttl_controller.go:118] Starting TTL controller
I0326 05:20:10.237912   55871 shared_informer.go:225] Waiting for caches to sync for TTL
I0326 05:20:10.238226   55871 controllermanager.go:533] Started "endpointslice"
I0326 05:20:10.238341   55871 endpointslice_controller.go:237] Starting endpoint slice controller
I0326 05:20:10.238350   55871 shared_informer.go:225] Waiting for caches to sync for endpoint_slice
E0326 05:20:10.238556   55871 core.go:89] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0326 05:20:10.238571   55871 controllermanager.go:525] Skipping "service"
+++ [0326 05:20:10] Testing kubectl version: check client only output matches expected output
I0326 05:20:10.316226   55871 shared_informer.go:232] Caches are synced for certificate-csrapproving 
I0326 05:20:10.324873   55871 shared_informer.go:232] Caches are synced for namespace 
I0326 05:20:10.327673   55871 shared_informer.go:232] Caches are synced for service account 
I0326 05:20:10.330108   52408 controller.go:606] quota admission added evaluator for: serviceaccounts
Successful: the flag '--client' shows correct client info
Successful: the flag '--client' correctly has no server version info
+++ [0326 05:20:10] Testing kubectl version: verify json output
W0326 05:20:10.592276   55871 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
Successful: --output json has correct client info
Successful: --output json has correct server info
+++ [0326 05:20:10] Testing kubectl version: verify json output using additional --client flag does not contain serverVersion
I0326 05:20:10.638088   55871 shared_informer.go:232] Caches are synced for TTL 
Successful: --client --output json has correct client info
Successful: --client --output json has no server info
... skipping 16 lines ...
I0326 05:20:10.929205   55871 shared_informer.go:232] Caches are synced for HPA 
I0326 05:20:10.929859   55871 shared_informer.go:232] Caches are synced for taint 
I0326 05:20:10.930209   55871 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
I0326 05:20:10.930339   55871 node_lifecycle_controller.go:1199] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0326 05:20:10.930925   55871 taint_manager.go:187] Starting NoExecuteTaintManager
I0326 05:20:10.931042   55871 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"5a43c7e2-be9a-4aae-86a1-ce2ac76f22a1", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller
E0326 05:20:10.935571   55871 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I0326 05:20:10.941348   55871 shared_informer.go:232] Caches are synced for endpoint_slice 
I0326 05:20:10.942147   55871 shared_informer.go:232] Caches are synced for disruption 
I0326 05:20:10.942173   55871 disruption.go:339] Sending events to api server.
E0326 05:20:10.946911   55871 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I0326 05:20:11.019230   55871 shared_informer.go:232] Caches are synced for attach detach 
I0326 05:20:11.026212   55871 shared_informer.go:232] Caches are synced for expand 
I0326 05:20:11.027474   55871 shared_informer.go:232] Caches are synced for PV protection 
I0326 05:20:11.030975   55871 shared_informer.go:232] Caches are synced for persistent volume 
I0326 05:20:11.036806   55871 shared_informer.go:232] Caches are synced for garbage collector 
I0326 05:20:11.036832   55871 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
... skipping 47 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [0326 05:20:13] Creating namespace namespace-1585200013-23242
namespace/namespace-1585200013-23242 created
Context "test" modified.
+++ [0326 05:20:13] Testing RESTMapper
+++ [0326 05:20:13] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
bindings                                                                      true         Binding
componentstatuses                 cs                                          false        ComponentStatus
configmaps                        cm                                          true         ConfigMap
endpoints                         ep                                          true         Endpoints
... skipping 58 lines ...
namespace/namespace-1585200017-26964 created
Context "test" modified.
+++ [0326 05:20:17] Testing clusterroles
rbac.sh:29: Successful get clusterroles/cluster-admin {{.metadata.name}}: cluster-admin
rbac.sh:30: Successful get clusterrolebindings/cluster-admin {{.metadata.name}}: cluster-admin
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created (dry run)
clusterrole.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created
rbac.sh:42: Successful get clusterrole/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "pod-admin" deleted
... skipping 18 lines ...
clusterrole.rbac.authorization.k8s.io/url-reader created
rbac.sh:61: Successful get clusterrole/url-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:
rbac.sh:62: Successful get clusterrole/url-reader {{range.rules}}{{range.nonResourceURLs}}{{.}}:{{end}}{{end}}: /logs/*:/healthz/*:
clusterrole.rbac.authorization.k8s.io/aggregation-reader created
rbac.sh:64: Successful get clusterrole/aggregation-reader {{.metadata.name}}: aggregation-reader
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created
rbac.sh:77: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (server dry run)
rbac.sh:80: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
... skipping 58 lines ...
rbac.sh:102: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:foo:test-all-user:
rbac.sh:103: Successful get clusterrolebinding/super-group {{range.subjects}}{{.name}}:{{end}}: the-group:foo:test-all-user:
rbac.sh:104: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.name}}:{{end}}: sa-name:foo:test-all-user:
rolebinding.rbac.authorization.k8s.io/admin created (dry run)
rolebinding.rbac.authorization.k8s.io/admin created (server dry run)
Successful
message:Error from server (NotFound): rolebindings.rbac.authorization.k8s.io "admin" not found
has: not found
rolebinding.rbac.authorization.k8s.io/admin created
rbac.sh:113: Successful get rolebinding/admin {{.roleRef.kind}}: ClusterRole
rbac.sh:114: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:
rolebinding.rbac.authorization.k8s.io/admin subjects updated
rbac.sh:116: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:foo:
... skipping 25 lines ...
namespace/namespace-1585200024-19568 created
Context "test" modified.
+++ [0326 05:20:24] Testing role
role.rbac.authorization.k8s.io/pod-admin created (dry run)
role.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): roles.rbac.authorization.k8s.io "pod-admin" not found
has: not found
role.rbac.authorization.k8s.io/pod-admin created
rbac.sh:155: Successful get role/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
rbac.sh:156: Successful get role/pod-admin {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
rbac.sh:157: Successful get role/pod-admin {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
Successful
... skipping 459 lines ...
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
has:valid-pod
core.sh:189: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: resource(s) were provided, but no name, label selector, or --all flag specified
core.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:197: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: setting 'all' parameter but found a non empty selector. 
core.sh:201: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:205: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(Bwarning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:209: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:214: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
... skipping 19 lines ...
poddisruptionbudget.policy/test-pdb-2 created
core.sh:258: Successful get pdb/test-pdb-2 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 50%
poddisruptionbudget.policy/test-pdb-3 created
core.sh:264: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
poddisruptionbudget.policy/test-pdb-4 created
core.sh:268: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
error: min-available and max-unavailable cannot be both specified
core.sh:274: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
pod/env-test-pod created
matched TEST_CMD_1
matched <set to the key 'key-1' in secret 'test-secret'>
matched TEST_CMD_2
matched <set to the key 'key-2' of config map 'test-configmap'>
... skipping 206 lines ...
pod/valid-pod patched
core.sh:517: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
pod/valid-pod patched
core.sh:522: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.2:
pod/valid-pod patched
core.sh:538: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
+++ [0326 05:20:53] "kubectl patch with resourceVersion 540" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
pod "valid-pod" deleted
pod/valid-pod replaced
core.sh:562: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
Successful
message:error: --grace-period must have --force specified
has:\-\-grace-period must have \-\-force specified
Successful
message:error: --timeout must have --force specified
has:\-\-timeout must have \-\-force specified
node/node-v1-test created
W0326 05:20:54.406801   55871 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
core.sh:586: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
node/node-v1-test replaced (server dry run)
node/node-v1-test replaced (dry run)
core.sh:611: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
node/node-v1-test replaced
core.sh:627: Successful get node node-v1-test {{.metadata.annotations.a}}: b
... skipping 26 lines ...
spec:
  containers:
  - image: k8s.gcr.io/pause:2.0
    name: kubernetes-pause
has:localonlyvalue
core.sh:660: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
error: 'name' already has a value (valid-pod), and --overwrite is false
core.sh:664: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
core.sh:668: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
pod/valid-pod labeled
core.sh:672: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
core.sh:676: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 83 lines ...
+++ Running case: test-cmd.run_kubectl_create_error_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_error_tests
+++ [0326 05:21:04] Creating namespace namespace-1585200064-6662
namespace/namespace-1585200064-6662 created
Context "test" modified.
+++ [0326 05:21:04] Testing kubectl create with error
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
... skipping 41 lines ...

Usage:
  kubectl create -f FILENAME [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
+++ [0326 05:21:04] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
+++ exit code: 0
Recording: run_kubectl_apply_tests
Running command: run_kubectl_apply_tests

... skipping 17 lines ...
(Bpod "test-pod" deleted
customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
I0326 05:21:06.975051   52408 client.go:361] parsed scheme: "endpoint"
I0326 05:21:06.975095   52408 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0326 05:21:06.978392   52408 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
kind.mygroup.example.com/myobj serverside-applied (server dry run)
Error from server (NotFound): resources.mygroup.example.com "myobj" not found
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
+++ exit code: 0
Recording: run_kubectl_run_tests
Running command: run_kubectl_run_tests

+++ Running case: test-cmd.run_kubectl_run_tests 
... skipping 12 lines ...
(Bpod "nginx-extensions" deleted
Successful
message:pod/test1 created
has:pod/test1 created
pod "test1" deleted
Successful
message:error: Invalid image name "InvalidImageName": invalid reference format
has:error: Invalid image name "InvalidImageName": invalid reference format
+++ exit code: 0
Recording: run_kubectl_create_filter_tests
Running command: run_kubectl_create_filter_tests

+++ Running case: test-cmd.run_kubectl_create_filter_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 3 lines ...
Context "test" modified.
+++ [0326 05:21:08] Testing kubectl create filter
create.sh:50: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/selector-test-pod created
create.sh:54: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_apply_deployments_tests
Running command: run_kubectl_apply_deployments_tests

... skipping 33 lines ...
I0326 05:21:11.528313   55871 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585200069-27260", Name:"nginx", UID:"e995c03f-2cdf-4b6c-b906-68afae72e289", APIVersion:"apps/v1", ResourceVersion:"597", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-9587c59df to 3
I0326 05:21:11.536073   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200069-27260", Name:"nginx-9587c59df", UID:"b25adf4a-e861-4139-b9d0-81994e2a5bc6", APIVersion:"apps/v1", ResourceVersion:"598", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9587c59df-sz84t
I0326 05:21:11.539427   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200069-27260", Name:"nginx-9587c59df", UID:"b25adf4a-e861-4139-b9d0-81994e2a5bc6", APIVersion:"apps/v1", ResourceVersion:"598", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9587c59df-w5kqh
I0326 05:21:11.541505   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200069-27260", Name:"nginx-9587c59df", UID:"b25adf4a-e861-4139-b9d0-81994e2a5bc6", APIVersion:"apps/v1", ResourceVersion:"598", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9587c59df-99mnm
apps.sh:149: Successful get deployment nginx {{.metadata.name}}: nginx
Successful
message:Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1585200069-27260\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx", Namespace: "namespace-1585200069-27260"
for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
has:Error from server (Conflict)
I0326 05:21:18.504136   55871 horizontal.go:354] Horizontal Pod Autoscaler frontend has been deleted in namespace-1585200061-29147
deployment.apps/nginx configured
I0326 05:21:21.040189   55871 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585200069-27260", Name:"nginx", UID:"28efcf9c-3be7-44ea-ae6f-9913e8726249", APIVersion:"apps/v1", ResourceVersion:"639", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-6c499547c4 to 3
I0326 05:21:21.043091   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200069-27260", Name:"nginx-6c499547c4", UID:"9b361822-90eb-4510-a6de-9d1cec790c1e", APIVersion:"apps/v1", ResourceVersion:"640", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6c499547c4-7tl5f
I0326 05:21:21.048872   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200069-27260", Name:"nginx-6c499547c4", UID:"9b361822-90eb-4510-a6de-9d1cec790c1e", APIVersion:"apps/v1", ResourceVersion:"640", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6c499547c4-j2m8q
I0326 05:21:21.049880   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200069-27260", Name:"nginx-6c499547c4", UID:"9b361822-90eb-4510-a6de-9d1cec790c1e", APIVersion:"apps/v1", ResourceVersion:"640", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6c499547c4-km4rh
Successful
message:        "name": "nginx2"
          "name": "nginx2"
has:"name": "nginx2"
E0326 05:21:25.359505   55871 replica_set.go:535] sync "namespace-1585200069-27260/nginx-6c499547c4" failed with Operation cannot be fulfilled on replicasets.apps "nginx-6c499547c4": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1585200069-27260/nginx-6c499547c4, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 9b361822-90eb-4510-a6de-9d1cec790c1e, UID in object meta: 
I0326 05:21:26.341766   55871 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585200069-27260", Name:"nginx", UID:"73debd65-14aa-4c4f-9e59-d566308f5f48", APIVersion:"apps/v1", ResourceVersion:"675", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-6c499547c4 to 3
I0326 05:21:26.345127   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200069-27260", Name:"nginx-6c499547c4", UID:"6176994c-2d2f-4757-9a6d-42de7a204049", APIVersion:"apps/v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6c499547c4-zxwt7
I0326 05:21:26.351055   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200069-27260", Name:"nginx-6c499547c4", UID:"6176994c-2d2f-4757-9a6d-42de7a204049", APIVersion:"apps/v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6c499547c4-qwzf5
I0326 05:21:26.351292   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200069-27260", Name:"nginx-6c499547c4", UID:"6176994c-2d2f-4757-9a6d-42de7a204049", APIVersion:"apps/v1", ResourceVersion:"676", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6c499547c4-859k5
Successful
message:The Deployment "nginx" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"name":"nginx3"}: `selector` does not match template `labels`
... skipping 211 lines ...
+++ [0326 05:21:29] Creating namespace namespace-1585200089-26936
namespace/namespace-1585200089-26936 created
Context "test" modified.
+++ [0326 05:21:29] Testing kubectl get
get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:{
    "apiVersion": "v1",
    "items": [],
... skipping 23 lines ...
has not:No resources found
Successful
message:NAME
has not:No resources found
get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:error: the server doesn't have a resource type "foobar"
has not:No resources found
Successful
message:No resources found in namespace-1585200089-26936 namespace.
has:No resources found
Successful
message:
has not:No resources found
Successful
message:No resources found in namespace-1585200089-26936 namespace.
has:No resources found
get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
Successful
message:Error from server (NotFound): pods "abc" not found
has not:List
Successful
message:I0326 05:21:31.032902   67018 loader.go:375] Config loaded from file:  /tmp/tmp.wm69lL4V8L/.kube/config
I0326 05:21:31.034233   67018 round_trippers.go:443] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 0 milliseconds
I0326 05:21:31.056592   67018 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
I0326 05:21:31.058137   67018 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds
... skipping 496 lines ...
Successful
message:NAME    DATA   AGE
one     0      0s
three   0      0s
two     0      0s
STATUS    REASON          MESSAGE
Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has not:watch is only supported on individual resources
Successful
message:STATUS    REASON          MESSAGE
Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has not:watch is only supported on individual resources
+++ [0326 05:21:37] Creating namespace namespace-1585200097-3213
namespace/namespace-1585200097-3213 created
Context "test" modified.
get.sh:153: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/valid-pod created
... skipping 104 lines ...
}
get.sh:158: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(B<no value>Successful
message:valid-pod:
has:valid-pod:
Successful
message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2020-03-26T05:21:38Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fieldsType":"FieldsV1", "fieldsV1":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2020-03-26T05:21:38Z"}}, "name":"valid-pod", "namespace":"namespace-1585200097-3213", "resourceVersion":"724", "selfLink":"/api/v1/namespaces/namespace-1585200097-3213/pods/valid-pod", "uid":"34a08477-717f-4f1b-9821-33945d5c2879"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
has:missing is not found
error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
Successful
message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2020-03-26T05:21:38Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2020-03-26T05:21:38Z"}],"name":"valid-pod","namespace":"namespace-1585200097-3213","resourceVersion":"724","selfLink":"/api/v1/namespaces/namespace-1585200097-3213/pods/valid-pod","uid":"34a08477-717f-4f1b-9821-33945d5c2879"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2020-03-26T05:21:38Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fieldsType:FieldsV1 fieldsV1:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2020-03-26T05:21:38Z]] name:valid-pod namespace:namespace-1585200097-3213 resourceVersion:724 selfLink:/api/v1/namespaces/namespace-1585200097-3213/pods/valid-pod uid:34a08477-717f-4f1b-9821-33945d5c2879] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
has:map has no entry for key "missing"
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
STATUS      REASON          MESSAGE
Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has:STATUS
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
STATUS      REASON          MESSAGE
Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has:valid-pod
Successful
message:pod/valid-pod
status/<unknown>
has not:STATUS
Successful
... skipping 81 lines ...
      (Client.Timeout exceeded while reading body)'
    reason: UnexpectedServerResponse
  - message: 'unable to decode an event from the watch stream: net/http: request canceled
      (Client.Timeout exceeded while reading body)'
    reason: ClientWatchDecoding
kind: Status
message: 'an error on the server ("unable to decode an event from the watch stream:
  net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented
  the request from succeeding'
metadata: {}
reason: InternalError
status: Failure
has not:STATUS
... skipping 78 lines ...
      (Client.Timeout exceeded while reading body)'
    reason: UnexpectedServerResponse
  - message: 'unable to decode an event from the watch stream: net/http: request canceled
      (Client.Timeout exceeded while reading body)'
    reason: ClientWatchDecoding
kind: Status
message: 'an error on the server ("unable to decode an event from the watch stream:
  net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented
  the request from succeeding'
metadata: {}
reason: InternalError
status: Failure
has:name: valid-pod
Successful
message:Error from server (NotFound): pods "invalid-pod" not found
has:"invalid-pod" not found
pod "valid-pod" deleted
get.sh:196: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/redis-master created
pod/valid-pod created
Successful
... skipping 36 lines ...
+++ [0326 05:21:43] Creating namespace namespace-1585200103-15736
namespace/namespace-1585200103-15736 created
Context "test" modified.
+++ [0326 05:21:43] Testing kubectl exec POD COMMAND
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
pod/test-pod created
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pods "test-pod" not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
pod "test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_exec_resource_name_tests
Running command: run_kubectl_exec_resource_name_tests

... skipping 3 lines ...
+++ [0326 05:21:44] Creating namespace namespace-1585200104-13468
namespace/namespace-1585200104-13468 created
Context "test" modified.
+++ [0326 05:21:44] Testing kubectl exec TYPE/NAME COMMAND
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
error: the server doesn't have a resource type "foo"
has:error:
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): deployments.apps "bar" not found
has:"bar" not found
pod/test-pod created
replicaset.apps/frontend created
I0326 05:21:44.797028   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200104-13468", Name:"frontend", UID:"afba9f34-c43d-48f1-8c06-7239bc561f87", APIVersion:"apps/v1", ResourceVersion:"785", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-fcd5p
I0326 05:21:44.802026   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200104-13468", Name:"frontend", UID:"afba9f34-c43d-48f1-8c06-7239bc561f87", APIVersion:"apps/v1", ResourceVersion:"785", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-r6q4w
I0326 05:21:44.809158   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200104-13468", Name:"frontend", UID:"afba9f34-c43d-48f1-8c06-7239bc561f87", APIVersion:"apps/v1", ResourceVersion:"785", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-cz225
configmap/test-set-env-config created
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
has:not implemented
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod, type/name or --filename must be specified
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod frontend-cz225 does not have a host assigned
has not:not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod frontend-cz225 does not have a host assigned
has not:pod, type/name or --filename must be specified
pod "test-pod" deleted
replicaset.apps "frontend" deleted
configmap "test-set-env-config" deleted
+++ exit code: 0
Recording: run_create_secret_tests
Running command: run_create_secret_tests

+++ Running case: test-cmd.run_create_secret_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_create_secret_tests
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
message:user-specified
has:user-specified
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"be102e5f-565b-47f2-bff7-4d681866313f","resourceVersion":"805","creationTimestamp":"2020-03-26T05:21:45Z"}}
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"be102e5f-565b-47f2-bff7-4d681866313f","resourceVersion":"806","creationTimestamp":"2020-03-26T05:21:45Z"},"data":{"key1":"config1"}}
has:uid
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"be102e5f-565b-47f2-bff7-4d681866313f","resourceVersion":"806","creationTimestamp":"2020-03-26T05:21:45Z"},"data":{"key1":"config1"}}
has:config1
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"be102e5f-565b-47f2-bff7-4d681866313f"}}
Successful
message:Error from server (NotFound): configmaps "tester-update-cm" not found
has:configmaps "tester-update-cm" not found
+++ exit code: 0
Recording: run_kubectl_create_kustomization_directory_tests
Running command: run_kubectl_create_kustomization_directory_tests

+++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 166 lines ...
valid-pod   0/1     Pending   0          0s
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
STATUS      REASON          MESSAGE
Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has:Timeout exceeded while reading body
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
Successful
message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
has:Invalid timeout value
pod "valid-pod" deleted
+++ exit code: 0
Recording: run_crd_tests
Running command: run_crd_tests

... skipping 240 lines ...
foo.company.com/test patched
crd.sh:236: Successful get foos/test {{.patched}}: value1
foo.company.com/test patched
crd.sh:238: Successful get foos/test {{.patched}}: value2
foo.company.com/test patched
crd.sh:240: Successful get foos/test {{.patched}}: <no value>
+++ [0326 05:21:56] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
{
    "apiVersion": "company.com/v1",
    "kind": "Foo",
    "metadata": {
        "annotations": {
            "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 300 lines ...
crd.sh:450: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: 
namespace/non-native-resources created
bar.company.com/test created
crd.sh:455: Successful get bars {{len .items}}: 1
(Bnamespace "non-native-resources" deleted
crd.sh:458: Successful get bars {{len .items}}: 0
Error from server (NotFound): namespaces "non-native-resources" not found
customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
+++ exit code: 0
+++ [0326 05:22:26] Testing recursive resources
... skipping 2 lines ...
Context "test" modified.
generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:pod/busybox0 created
pod/busybox1 created
error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
W0326 05:22:27.408680   52408 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured
E0326 05:22:27.409890   55871 reflector.go:380] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
E0326 05:22:27.410635   55871 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
W0326 05:22:27.511838   52408 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured
E0326 05:22:27.512911   55871 reflector.go:380] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
E0326 05:22:27.513758   55871 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0326 05:22:27.620368   52408 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured
E0326 05:22:27.621457   55871 reflector.go:380] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
E0326 05:22:27.622165   55871 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
Successful
message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
W0326 05:22:27.724029   52408 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured
E0326 05:22:27.724874   55871 reflector.go:380] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
E0326 05:22:27.725552   55871 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:pod/busybox0 replaced
pod/busybox1 replaced
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:Name:         busybox0
Namespace:    namespace-1585200146-15631
Priority:     0
Node:         <none>
... skipping 159 lines ...
has:Object 'Kind' is missing
generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
Successful
message:pod/busybox0 annotated
pod/busybox1 annotated
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx created
I0326 05:22:29.119707   55871 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585200146-15631", Name:"nginx", UID:"6f82cfee-3522-4998-8921-f5d8df8234bc", APIVersion:"apps/v1", ResourceVersion:"994", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-9c6f87b75 to 3
I0326 05:22:29.123689   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200146-15631", Name:"nginx-9c6f87b75", UID:"5b279060-32a4-4b29-a9e0-3d45fe802391", APIVersion:"apps/v1", ResourceVersion:"995", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9c6f87b75-jfhmm
I0326 05:22:29.127904   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200146-15631", Name:"nginx-9c6f87b75", UID:"5b279060-32a4-4b29-a9e0-3d45fe802391", APIVersion:"apps/v1", ResourceVersion:"995", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9c6f87b75-dj7rf
I0326 05:22:29.128066   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200146-15631", Name:"nginx-9c6f87b75", UID:"5b279060-32a4-4b29-a9e0-3d45fe802391", APIVersion:"apps/v1", ResourceVersion:"995", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9c6f87b75-qghgd
... skipping 48 lines ...
deployment.apps "nginx" deleted
generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
Successful
message:pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0326 05:22:30.311424   55871 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
E0326 05:22:30.417083   55871 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0326 05:22:30.424854   55871 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
Successful
message:pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0326 05:22:30.612735   55871 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "busybox0" force deleted
pod "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
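The "Immediate deletion does not wait..." warning is what kubectl prints for a force delete; a sketch of the invocation pattern that produces it, against the same recursive directory:

kubectl delete -f hack/testdata/recursive/pod --recursive --grace-period=0 --force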
generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/busybox0 created
I0326 05:22:30.972951   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585200146-15631", Name:"busybox0", UID:"d3e4c642-a369-4774-b583-9dd656277b15", APIVersion:"v1", ResourceVersion:"1026", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-6tf6b
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0326 05:22:30.978255   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585200146-15631", Name:"busybox1", UID:"f8c72dab-c697-4475-943b-415ea6d4d521", APIVersion:"v1", ResourceVersion:"1028", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-fgpf4
generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0326 05:22:31.145921   55871 namespace_controller.go:185] Namespace has been deleted non-native-resources
generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
Successful
message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
horizontalpodautoscaler.autoscaling/busybox1 autoscaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
horizontalpodautoscaler.autoscaling "busybox0" deleted
horizontalpodautoscaler.autoscaling "busybox1" deleted
generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
Successful
message:service/busybox0 exposed
service/busybox1 exposed
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I0326 05:22:32.597135   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585200146-15631", Name:"busybox0", UID:"d3e4c642-a369-4774-b583-9dd656277b15", APIVersion:"v1", ResourceVersion:"1050", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-mrpxn
I0326 05:22:32.612631   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585200146-15631", Name:"busybox1", UID:"f8c72dab-c697-4475-943b-415ea6d4d521", APIVersion:"v1", ResourceVersion:"1056", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-jn6q9
generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
Successful
message:replicationcontroller/busybox0 scaled
replicationcontroller/busybox1 scaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx1-deployment created
I0326 05:22:33.314965   55871 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585200146-15631", Name:"nginx1-deployment", UID:"a6396e54-f8f9-4b9f-9109-41fda8385e1d", APIVersion:"apps/v1", ResourceVersion:"1072", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-866c6857d5 to 2
deployment.apps/nginx0-deployment created
error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0326 05:22:33.320706   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200146-15631", Name:"nginx1-deployment-866c6857d5", UID:"baf65957-6f13-4ca5-9206-6487f8fb6e71", APIVersion:"apps/v1", ResourceVersion:"1073", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-866c6857d5-9p9lj
I0326 05:22:33.320823   55871 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585200146-15631", Name:"nginx0-deployment", UID:"c89b549c-3f02-474b-885a-5e723b4de1c4", APIVersion:"apps/v1", ResourceVersion:"1074", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-ff7db88b6 to 2
I0326 05:22:33.326241   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200146-15631", Name:"nginx1-deployment-866c6857d5", UID:"baf65957-6f13-4ca5-9206-6487f8fb6e71", APIVersion:"apps/v1", ResourceVersion:"1073", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-866c6857d5-xbg99
I0326 05:22:33.327410   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200146-15631", Name:"nginx0-deployment-ff7db88b6", UID:"8f5125cf-8230-45ea-8578-7092ad284bbf", APIVersion:"apps/v1", ResourceVersion:"1078", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-ff7db88b6-skvzx
I0326 05:22:33.332261   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200146-15631", Name:"nginx0-deployment-ff7db88b6", UID:"8f5125cf-8230-45ea-8578-7092ad284bbf", APIVersion:"apps/v1", ResourceVersion:"1078", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-ff7db88b6-bttlh
generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
Successful
message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
deployment.apps/nginx1-deployment paused
deployment.apps/nginx0-deployment paused
generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
Successful
message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx0-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx1-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
deployment.apps "nginx1-deployment" force deleted
deployment.apps "nginx0-deployment" force deleted
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
E0326 05:22:34.285832   55871 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0326 05:22:34.505936   55871 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0326 05:22:35.271649   55871 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/busybox0 created
I0326 05:22:35.515213   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585200146-15631", Name:"busybox0", UID:"8eddadb8-887a-415a-9ca8-1df1090cbcfe", APIVersion:"v1", ResourceVersion:"1123", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-cc5wb
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0326 05:22:35.520933   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585200146-15631", Name:"busybox1", UID:"6ced9f39-9177-48d2-b8ec-ae6a99a56663", APIVersion:"v1", ResourceVersion:"1125", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-4dqmn
generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:no rollbacker has been implemented for "ReplicationController"
Successful
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
E0326 05:22:35.768310   55871 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox0" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox1" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox0" resuming is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox1" resuming is not supported
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
Recording: run_namespace_tests
Running command: run_namespace_tests

+++ Running case: test-cmd.run_namespace_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_namespace_tests
+++ [0326 05:22:37] Testing kubectl(v1:namespaces)
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created (dry run)
namespace/my-namespace created (server dry run)
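"created (dry run)" and "created (server dry run)" correspond to client- and server-side dry run; a sketch of the two invocations (flag spelling as of kubectl of this era):

kubectl create namespace my-namespace --dry-run=client
kubectl create namespace my-namespace --dry-run=server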
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created
core.sh:1413: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
(Bnamespace "my-namespace" deleted
namespace/my-namespace condition met
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created
core.sh:1422: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
... skipping 28 lines ...
namespace "namespace-1585200107-2738" deleted
namespace "namespace-1585200107-28623" deleted
namespace "namespace-1585200108-25753" deleted
namespace "namespace-1585200110-30960" deleted
namespace "namespace-1585200112-23218" deleted
namespace "namespace-1585200146-15631" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:warning: deleting cluster-scoped resources
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
namespace "my-namespace" deleted
namespace "namespace-1585200011-32603" deleted
... skipping 26 lines ...
namespace "namespace-1585200107-2738" deleted
namespace "namespace-1585200107-28623" deleted
namespace "namespace-1585200108-25753" deleted
namespace "namespace-1585200110-30960" deleted
namespace "namespace-1585200112-23218" deleted
namespace "namespace-1585200146-15631" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:namespace "my-namespace" deleted
namespace/quotas created
core.sh:1429: Successful get namespaces/quotas {{.metadata.name}}: quotas
core.sh:1430: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name \"test-quota\" }}found{{end}}{{end}}:: :
resourcequota/test-quota created (dry run)
resourcequota/test-quota created (server dry run)
core.sh:1434: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name \"test-quota\" }}found{{end}}{{end}}:: :
resourcequota/test-quota created
core.sh:1437: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name \"test-quota\" }}found{{end}}{{end}}:: found:
I0326 05:22:43.963994   55871 resource_quota_controller.go:306] Resource quota has been deleted quotas/test-quota
resourcequota "test-quota" deleted
namespace "quotas" deleted
E0326 05:22:44.302621   55871 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0326 05:22:44.669023   55871 shared_informer.go:225] Waiting for caches to sync for resource quota
I0326 05:22:44.669076   55871 shared_informer.go:232] Caches are synced for resource quota 
E0326 05:22:45.046767   55871 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0326 05:22:45.190592   55871 shared_informer.go:225] Waiting for caches to sync for garbage collector
I0326 05:22:45.190648   55871 shared_informer.go:232] Caches are synced for garbage collector 
E0326 05:22:45.627212   55871 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0326 05:22:46.397131   55871 horizontal.go:354] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1585200146-15631
I0326 05:22:46.400252   55871 horizontal.go:354] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1585200146-15631
E0326 05:22:47.399212   55871 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1449: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
namespace/other created
core.sh:1453: Successful get namespaces/other {{.metadata.name}}: other
core.sh:1457: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
pod/valid-pod created
core.sh:1461: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:1463: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Successful
message:error: a resource cannot be retrieved by name across all namespaces
has:a resource cannot be retrieved by name across all namespaces
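That guard rejects combining a resource name with --all-namespaces; a sketch of a command that trips it:

kubectl get pod valid-pod --all-namespaces
# error: a resource cannot be retrieved by name across all namespaces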
core.sh:1470: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:1474: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
(Bnamespace "other" deleted
... skipping 108 lines ...
core.sh:872: Successful get secret/secret-string-data --namespace=test-secrets  {{.data}}: map[k1:djE= k2:djI=]
core.sh:873: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
secret "secret-string-data" deleted
core.sh:882: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
secret "test-secret" deleted
namespace "test-secrets" deleted
E0326 05:22:59.937577   55871 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0326 05:23:00.252979   55871 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0326 05:23:00.331521   55871 namespace_controller.go:185] Namespace has been deleted other
E0326 05:23:01.135896   55871 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Recording: run_configmap_tests
Running command: run_configmap_tests

+++ Running case: test-cmd.run_configmap_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 17 lines ...
configmap/test-binary-configmap created
core.sh:51: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
core.sh:52: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
configmap "test-configmap" deleted
configmap "test-binary-configmap" deleted
namespace "test-configmaps" deleted
E0326 05:23:07.633096   55871 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0326 05:23:08.672085   55871 namespace_controller.go:185] Namespace has been deleted test-secrets
+++ exit code: 0
Recording: run_client_config_tests
Running command: run_client_config_tests

+++ Running case: test-cmd.run_client_config_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_client_config_tests
+++ [0326 05:23:12] Creating namespace namespace-1585200192-14624
namespace/namespace-1585200192-14624 created
Context "test" modified.
+++ [0326 05:23:12] Testing client config
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:Error in configuration: context was not found for specified context: missing-context
has:context was not found for specified context: missing-context
Successful
message:error: no server found for cluster "missing-cluster"
has:no server found for cluster "missing-cluster"
Successful
message:error: auth info "missing-user" does not exist
has:auth info "missing-user" does not exist
Successful
message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
has:error loading config file
Successful
message:error: stat missing-config: no such file or directory
has:no such file or directory
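Each failure above maps to a distinct kubeconfig misconfiguration; a sketch of invocations that produce them (names taken from the messages themselves):

kubectl get pods --kubeconfig=missing        # error: stat missing: no such file or directory
kubectl get pods --context=missing-context   # context was not found for specified context
kubectl get pods --cluster=missing-cluster   # error: no server found for cluster "missing-cluster"
kubectl get pods --user=missing-user         # error: auth info "missing-user" does not exist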
+++ exit code: 0
Recording: run_service_accounts_tests
Running command: run_service_accounts_tests

+++ Running case: test-cmd.run_service_accounts_tests 
... skipping 43 lines ...
Labels:                        <none>
Annotations:                   <none>
Schedule:                      59 23 31 2 *
Concurrency Policy:            Allow
Suspend:                       False
Successful Job History Limit:  3
Failed Job History Limit:      1
Starting Deadline Seconds:     <unset>
Selector:                      <unset>
Parallelism:                   <unset>
Completions:                   <unset>
Pod Template:
  Labels:  <none>
... skipping 38 lines ...
                job-name=test-job
Annotations:    cronjob.kubernetes.io/instantiate: manual
Controlled By:  CronJob/pi
Parallelism:    1
Completions:    1
Start Time:     Thu, 26 Mar 2020 05:23:20 +0000
Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=c7ce3ff2-7113-4505-8449-8955e52afdf8
           job-name=test-job
  Containers:
   pi:
    Image:      k8s.gcr.io/perl
... skipping 446 lines ...
  selector:
    role: padawan
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
core.sh:980: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
service/redis-master selector updated
Successful
message:Error from server (Conflict): Operation cannot be fulfilled on services "redis-master": the object has been modified; please apply your changes to the latest version and try again
has:Conflict
core.sh:993: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
(Bservice "redis-master" deleted
core.sh:1000: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
(BI0326 05:23:31.500582   55871 namespace_controller.go:185] Namespace has been deleted test-jobs
core.sh:1004: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
... skipping 117 lines ...
apps.sh:85: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:86: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
daemonset.apps/bind rolled back
apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
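"unable to find specified revision" is rollout undo being pointed at a history entry that does not exist; a sketch:

kubectl rollout undo daemonset/bind --to-revision=1000000
# error: unable to find specified revision 1000000 in history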
apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:95: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
daemonset.apps/bind rolled back
E0326 05:23:39.659206   55871 daemon_controller.go:292] namespace-1585200217-17007/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1585200217-17007", SelfLink:"/apis/apps/v1/namespaces/namespace-1585200217-17007/daemonsets/bind", UID:"c9516826-5df2-469d-bc2c-b59509babdf5", ResourceVersion:"1638", Generation:4, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63720797017, loc:(*time.Location)(0x6d06380)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"4", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1585200217-17007\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001743960), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001743980)}, v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0017439a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0017439c0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0017439e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:latest", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), 
VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"app", Image:"k8s.gcr.io/nginx:test-cmd", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003456998), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0003c6070), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc001743a00), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000bd01c8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0034569ec)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:3, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
apps.sh:98: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
apps.sh:99: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:100: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
daemonset.apps "bind" deleted
+++ exit code: 0
Recording: run_rc_tests
... skipping 32 lines ...
Namespace:    namespace-1585200220-9737
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1585200220-9737
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
Namespace:    namespace-1585200220-9737
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
Namespace:    namespace-1585200220-9737
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 27 lines ...
Namespace:    namespace-1585200220-9737
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1585200220-9737
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1585200220-9737
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
Namespace:    namespace-1585200220-9737
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 15 lines ...
core.sh:1178: Successful get rc frontend {{.spec.replicas}}: 3
replicationcontroller/frontend scaled
E0326 05:23:41.998120   55871 replica_set.go:200] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend  namespace-1585200220-9737 /api/v1/namespaces/namespace-1585200220-9737/replicationcontrollers/frontend 53980404-c570-466a-b7cb-2424d8ad4342 1673 2 2020-03-26 05:23:40 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  [{kube-controller-manager Update v1 2020-03-26 05:23:40 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}} {kubectl Update v1 2020-03-26 05:23:40 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{"f:replicas":{},"f:selector":{".":{},"f:app":{},"f:tier":{}},"f:template":{".":{},"f:metadata":{".":{},"f:creationTimestamp":{},"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{".":{},"f:containers":{".":{},"k:{\"name\":\"php-redis\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"GET_HOSTS_FROM\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v4 [] []  [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003456788 <nil> ClusterFirst map[]   <nil>  false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
I0326 05:23:42.004177   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585200220-9737", Name:"frontend", UID:"53980404-c570-466a-b7cb-2424d8ad4342", APIVersion:"v1", ResourceVersion:"1673", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-xrrp4
core.sh:1182: Successful get rc frontend {{.spec.replicas}}: 2
core.sh:1186: Successful get rc frontend {{.spec.replicas}}: 2
error: Expected replicas to be 3, was 2
core.sh:1190: Successful get rc frontend {{.spec.replicas}}: 2
core.sh:1194: Successful get rc frontend {{.spec.replicas}}: 2
replicationcontroller/frontend scaled
I0326 05:23:42.518348   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585200220-9737", Name:"frontend", UID:"53980404-c570-466a-b7cb-2424d8ad4342", APIVersion:"v1", ResourceVersion:"1679", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-nfft8
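"error: Expected replicas to be 3, was 2" above is the --current-replicas precondition of kubectl scale failing; a sketch of the pattern:

kubectl scale rc frontend --current-replicas=3 --replicas=2
# fails with the error above when the rc actually has 2 replicas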
core.sh:1198: Successful get rc frontend {{.spec.replicas}}: 3
core.sh:1202: Successful get rc frontend {{.spec.replicas}}: 3
... skipping 15 lines ...
I0326 05:23:43.373054   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585200220-9737", Name:"redis-slave", UID:"47043924-9cdd-4b39-8a6c-8c32eb0af07f", APIVersion:"v1", ResourceVersion:"1711", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-5gkpv
I0326 05:23:43.378221   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585200220-9737", Name:"redis-slave", UID:"47043924-9cdd-4b39-8a6c-8c32eb0af07f", APIVersion:"v1", ResourceVersion:"1711", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-55qrf
core.sh:1216: Successful get rc redis-master {{.spec.replicas}}: 4
core.sh:1217: Successful get rc redis-slave {{.spec.replicas}}: 4
replicationcontroller "redis-master" deleted
replicationcontroller "redis-slave" deleted
E0326 05:23:43.671764   55871 replica_set.go:535] sync "namespace-1585200220-9737/redis-slave" failed with Operation cannot be fulfilled on replicationcontrollers "redis-slave": StorageError: invalid object, Code: 4, Key: /registry/controllers/namespace-1585200220-9737/redis-slave, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 47043924-9cdd-4b39-8a6c-8c32eb0af07f, UID in object meta: 
deployment.apps/nginx-deployment created
I0326 05:23:43.803532   55871 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585200220-9737", Name:"nginx-deployment", UID:"2aea1a12-74fd-4cb7-8189-936e84f62568", APIVersion:"apps/v1", ResourceVersion:"1743", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6866878c7b to 3
I0326 05:23:43.807354   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200220-9737", Name:"nginx-deployment-6866878c7b", UID:"794b259e-6b9d-4cc5-bc14-b58382fc5e34", APIVersion:"apps/v1", ResourceVersion:"1744", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6866878c7b-5fswd
I0326 05:23:43.810889   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200220-9737", Name:"nginx-deployment-6866878c7b", UID:"794b259e-6b9d-4cc5-bc14-b58382fc5e34", APIVersion:"apps/v1", ResourceVersion:"1744", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6866878c7b-vvsrx
I0326 05:23:43.811144   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200220-9737", Name:"nginx-deployment-6866878c7b", UID:"794b259e-6b9d-4cc5-bc14-b58382fc5e34", APIVersion:"apps/v1", ResourceVersion:"1744", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6866878c7b-fhtg7
deployment.apps/nginx-deployment scaled
... skipping 4 lines ...
(Bdeployment.apps "nginx-deployment" deleted
Successful
message:service/expose-test-deployment exposed
has:service/expose-test-deployment exposed
service "expose-test-deployment" deleted
Successful
message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
See 'kubectl expose -h' for help and examples
has:invalid deployment: no selectors
deployment.apps/nginx-deployment created
I0326 05:23:44.477508   55871 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585200220-9737", Name:"nginx-deployment", UID:"1dcb75de-40b6-41ce-a22f-fa8a60516644", APIVersion:"apps/v1", ResourceVersion:"1783", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6866878c7b to 3
I0326 05:23:44.482269   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200220-9737", Name:"nginx-deployment-6866878c7b", UID:"6284f959-f66c-4585-968a-360ef0fe5f90", APIVersion:"apps/v1", ResourceVersion:"1784", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6866878c7b-m8fjm
I0326 05:23:44.486148   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200220-9737", Name:"nginx-deployment-6866878c7b", UID:"6284f959-f66c-4585-968a-360ef0fe5f90", APIVersion:"apps/v1", ResourceVersion:"1784", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6866878c7b-shnfz
... skipping 7 lines ...
I0326 05:23:44.990852   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585200220-9737", Name:"frontend", UID:"ea4b5bb1-4bdc-4516-9c0f-a06ff76ed35b", APIVersion:"v1", ResourceVersion:"1816", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-4tdw5
I0326 05:23:44.994129   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585200220-9737", Name:"frontend", UID:"ea4b5bb1-4bdc-4516-9c0f-a06ff76ed35b", APIVersion:"v1", ResourceVersion:"1816", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-5hldf
I0326 05:23:44.994174   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585200220-9737", Name:"frontend", UID:"ea4b5bb1-4bdc-4516-9c0f-a06ff76ed35b", APIVersion:"v1", ResourceVersion:"1816", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-64bd8
core.sh:1256: Successful get rc frontend {{.spec.replicas}}: 3
service/frontend exposed
core.sh:1260: Successful get service frontend {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
E0326 05:23:45.280420   55871 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/frontend-2 exposed
core.sh:1264: Successful get service frontend-2 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 443
pod/valid-pod created
service/frontend-3 exposed
core.sh:1269: Successful get service frontend-3 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 444
service/frontend-4 exposed
... skipping 4 lines ...
service "frontend" deleted
service "frontend-2" deleted
service "frontend-3" deleted
service "frontend-4" deleted
service "frontend-5" deleted
Successful
message:error: cannot expose a Node
has:cannot expose
Successful
message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
has:metadata.name: Invalid value
Successful
message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 4 lines ...
has:etcd-server exposed
core.sh:1307: Successful get service etcd-server {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: port-1 2380
core.sh:1308: Successful get service etcd-server {{(index .spec.ports 1).name}} {{(index .spec.ports 1).port}}: port-2 2379
(Bservice "etcd-server" deleted
core.sh:1314: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
replicationcontroller "frontend" deleted
E0326 05:23:47.297789   55871 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1318: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:1322: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/frontend created
I0326 05:23:47.629528   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585200220-9737", Name:"frontend", UID:"08a6c81e-d6ed-4525-b40e-6e14bbf60edd", APIVersion:"v1", ResourceVersion:"1894", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-nh497
I0326 05:23:47.632933   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585200220-9737", Name:"frontend", UID:"08a6c81e-d6ed-4525-b40e-6e14bbf60edd", APIVersion:"v1", ResourceVersion:"1894", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-t5vpx
I0326 05:23:47.636551   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585200220-9737", Name:"frontend", UID:"08a6c81e-d6ed-4525-b40e-6e14bbf60edd", APIVersion:"v1", ResourceVersion:"1894", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-s6xrg
... skipping 7 lines ...
core.sh:1335: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:1339: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/frontend created
I0326 05:23:48.421877   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585200220-9737", Name:"frontend", UID:"71e65d64-463d-4aab-a673-6ce9b8d01da2", APIVersion:"v1", ResourceVersion:"1922", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7czzl
I0326 05:23:48.425138   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585200220-9737", Name:"frontend", UID:"71e65d64-463d-4aab-a673-6ce9b8d01da2", APIVersion:"v1", ResourceVersion:"1922", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-6jj9m
I0326 05:23:48.427142   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585200220-9737", Name:"frontend", UID:"71e65d64-463d-4aab-a673-6ce9b8d01da2", APIVersion:"v1", ResourceVersion:"1922", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-6sn74
E0326 05:23:48.440810   55871 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1342: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: frontend:
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1345: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
horizontalpodautoscaler.autoscaling "frontend" deleted
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1349: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
horizontalpodautoscaler.autoscaling "frontend" deleted
Error: required flag(s) "max" not set
replicationcontroller "frontend" deleted
core.sh:1358: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
... skipping 24 lines ...
          limits:
            cpu: 300m
          requests:
            cpu: 300m
      terminationGracePeriodSeconds: 0
status: {}
Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
deployment.apps/nginx-deployment-resources created
I0326 05:23:49.542976   55871 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585200220-9737", Name:"nginx-deployment-resources", UID:"b694ef47-a9e9-4cab-81b6-a9212b2b4dba", APIVersion:"apps/v1", ResourceVersion:"1944", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-79666b9cd9 to 3
I0326 05:23:49.548047   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200220-9737", Name:"nginx-deployment-resources-79666b9cd9", UID:"c343a76a-6a83-4087-bf39-d6b05ce26326", APIVersion:"apps/v1", ResourceVersion:"1945", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-79666b9cd9-6tfn5
I0326 05:23:49.552247   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200220-9737", Name:"nginx-deployment-resources-79666b9cd9", UID:"c343a76a-6a83-4087-bf39-d6b05ce26326", APIVersion:"apps/v1", ResourceVersion:"1945", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-79666b9cd9-ffjg9
I0326 05:23:49.553050   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200220-9737", Name:"nginx-deployment-resources-79666b9cd9", UID:"c343a76a-6a83-4087-bf39-d6b05ce26326", APIVersion:"apps/v1", ResourceVersion:"1945", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-79666b9cd9-7r7f2
core.sh:1364: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
core.sh:1365: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
core.sh:1366: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment-resources resource requirements updated
I0326 05:23:49.908920   55871 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585200220-9737", Name:"nginx-deployment-resources", UID:"b694ef47-a9e9-4cab-81b6-a9212b2b4dba", APIVersion:"apps/v1", ResourceVersion:"1958", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-8b888884f to 1
I0326 05:23:49.912618   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200220-9737", Name:"nginx-deployment-resources-8b888884f", UID:"8490aa2f-56f9-43dd-88da-673e0e73c76f", APIVersion:"apps/v1", ResourceVersion:"1959", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-8b888884f-774k5
core.sh:1369: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
core.sh:1370: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
error: unable to find container named redis
deployment.apps/nginx-deployment-resources resource requirements updated
I0326 05:23:50.283702   55871 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585200220-9737", Name:"nginx-deployment-resources", UID:"b694ef47-a9e9-4cab-81b6-a9212b2b4dba", APIVersion:"apps/v1", ResourceVersion:"1968", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-79666b9cd9 to 2
I0326 05:23:50.290515   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200220-9737", Name:"nginx-deployment-resources-79666b9cd9", UID:"c343a76a-6a83-4087-bf39-d6b05ce26326", APIVersion:"apps/v1", ResourceVersion:"1972", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-79666b9cd9-6tfn5
I0326 05:23:50.292599   55871 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585200220-9737", Name:"nginx-deployment-resources", UID:"b694ef47-a9e9-4cab-81b6-a9212b2b4dba", APIVersion:"apps/v1", ResourceVersion:"1971", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-76f48f979f to 1
I0326 05:23:50.298964   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200220-9737", Name:"nginx-deployment-resources-76f48f979f", UID:"aea96d17-394e-44cc-a2fc-106df9bc3a2e", APIVersion:"apps/v1", ResourceVersion:"1976", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-76f48f979f-p2tsd
core.sh:1375: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
... skipping 363 lines ...
    status: "True"
    type: Progressing
  observedGeneration: 4
  replicas: 4
  unavailableReplicas: 4
  updatedReplicas: 1
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
core.sh:1386: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
core.sh:1387: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
core.sh:1388: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 47 lines ...
                pod-template-hash=c9cc54d87
Annotations:    deployment.kubernetes.io/desired-replicas: 1
                deployment.kubernetes.io/max-replicas: 2
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/test-nginx-apps
Replicas:       1 current / 1 desired
Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=test-nginx-apps
           pod-template-hash=c9cc54d87
  Containers:
   nginx:
    Image:        k8s.gcr.io/nginx:test-cmd
... skipping 98 lines ...
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.apps/nginx configured
I0326 05:23:57.044212   55871 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585200231-6341", Name:"nginx", UID:"d3ea8f45-7260-4c51-93f5-0c158ebf5bb7", APIVersion:"apps/v1", ResourceVersion:"2167", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-697546885c to 1
I0326 05:23:57.050597   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200231-6341", Name:"nginx-697546885c", UID:"4ee66a53-24dd-4681-9031-7d20726b6ccc", APIVersion:"apps/v1", ResourceVersion:"2168", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-697546885c-ndgvm
apps.sh:301: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
    Image:	k8s.gcr.io/nginx:test-cmd
E0326 05:23:57.340963   55871 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx rolled back (server dry run)
apps.sh:305: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
deployment.apps/nginx rolled back
apps.sh:309: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
error: unable to find specified revision 1000000 in history
apps.sh:312: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
deployment.apps/nginx rolled back
apps.sh:316: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
deployment.apps/nginx paused
error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
error: deployments.apps "nginx" can't restart paused deployment (run rollout resume first)
deployment.apps/nginx resumed
deployment.apps/nginx rolled back
    deployment.kubernetes.io/revision-history: 1,3
error: desired revision (3) is different from the running revision (5)
deployment.apps/nginx restarted
I0326 05:24:01.048860   55871 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585200231-6341", Name:"nginx", UID:"d3ea8f45-7260-4c51-93f5-0c158ebf5bb7", APIVersion:"apps/v1", ResourceVersion:"2199", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-9c6f87b75 to 2
I0326 05:24:01.055191   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200231-6341", Name:"nginx-9c6f87b75", UID:"80b911eb-6b5a-4d67-a331-5486b8e0b5a3", APIVersion:"apps/v1", ResourceVersion:"2203", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-9c6f87b75-pxl58
I0326 05:24:01.059942   55871 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585200231-6341", Name:"nginx", UID:"d3ea8f45-7260-4c51-93f5-0c158ebf5bb7", APIVersion:"apps/v1", ResourceVersion:"2202", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-58f898c89 to 1
I0326 05:24:01.063441   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200231-6341", Name:"nginx-58f898c89", UID:"d130ea8e-9928-4675-a451-539a3d9d08da", APIVersion:"apps/v1", ResourceVersion:"2207", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-58f898c89-btc4s
Successful
... skipping 147 lines ...
I0326 05:24:03.589944   55871 horizontal.go:354] Horizontal Pod Autoscaler frontend has been deleted in namespace-1585200220-9737
deployment.apps/nginx-deployment image updated
I0326 05:24:03.623907   55871 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585200231-6341", Name:"nginx-deployment", UID:"05801997-d7cf-441a-ae0f-ac84fbc9cf3c", APIVersion:"apps/v1", ResourceVersion:"2269", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6d5f69bf98 to 1
I0326 05:24:03.629740   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200231-6341", Name:"nginx-deployment-6d5f69bf98", UID:"3a96ad9d-b8d4-43f6-900c-16aeb25a8a25", APIVersion:"apps/v1", ResourceVersion:"2270", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6d5f69bf98-k2gdg
apps.sh:359: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
apps.sh:360: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
error: unable to find container named "redis"
deployment.apps/nginx-deployment image updated
apps.sh:365: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:366: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment image updated
apps.sh:369: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
apps.sh:370: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
... skipping 50 lines ...
I0326 05:24:07.444291   55871 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585200231-6341", Name:"nginx-deployment", UID:"98521727-c904-4fdf-b326-b33fefcc41f1", APIVersion:"apps/v1", ResourceVersion:"2413", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-75bb56f9c to 1
deployment.apps/nginx-deployment env updated
I0326 05:24:07.546596   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200231-6341", Name:"nginx-deployment-75bb56f9c", UID:"bbe52fc4-ba78-4eb9-a8db-abb936f0b3c1", APIVersion:"apps/v1", ResourceVersion:"2420", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-75bb56f9c-v8r4s
deployment.apps/nginx-deployment env updated
deployment.apps "nginx-deployment" deleted
configmap "test-set-env-config" deleted
E0326 05:24:07.795318   55871 replica_set.go:535] sync "namespace-1585200231-6341/nginx-deployment-5d757cf5f8" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-5d757cf5f8": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1585200231-6341/nginx-deployment-5d757cf5f8, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 657532c3-8ad2-4903-b244-e2a495c41214, UID in object meta: 
secret "test-set-env-secret" deleted
+++ exit code: 0
Recording: run_rs_tests
Running command: run_rs_tests
E0326 05:24:07.895367   55871 replica_set.go:535] sync "namespace-1585200231-6341/nginx-deployment-8486fbf9cc" failed with replicasets.apps "nginx-deployment-8486fbf9cc" not found

+++ Running case: test-cmd.run_rs_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_rs_tests
+++ [0326 05:24:07] Creating namespace namespace-1585200247-8813
E0326 05:24:07.944839   55871 replica_set.go:535] sync "namespace-1585200231-6341/nginx-deployment-75bb56f9c" failed with replicasets.apps "nginx-deployment-75bb56f9c" not found
namespace/namespace-1585200247-8813 created
Context "test" modified.
+++ [0326 05:24:08] Testing kubectl(v1:replicasets)
apps.sh:533: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
replicaset.apps/frontend created
I0326 05:24:08.303987   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200247-8813", Name:"frontend", UID:"394a42cf-6c2a-4f55-9489-654206834f1e", APIVersion:"apps/v1", ResourceVersion:"2449", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-cb2x9
I0326 05:24:08.307820   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200247-8813", Name:"frontend", UID:"394a42cf-6c2a-4f55-9489-654206834f1e", APIVersion:"apps/v1", ResourceVersion:"2449", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-j55mr
I0326 05:24:08.309626   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200247-8813", Name:"frontend", UID:"394a42cf-6c2a-4f55-9489-654206834f1e", APIVersion:"apps/v1", ResourceVersion:"2449", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-mxssb
+++ [0326 05:24:08] Deleting rs
replicaset.apps "frontend" deleted
E0326 05:24:08.395264   55871 replica_set.go:535] sync "namespace-1585200247-8813/frontend" failed with replicasets.apps "frontend" not found
apps.sh:539: Successful get pods -l "tier=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:543: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
replicaset.apps/frontend created
I0326 05:24:08.742790   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200247-8813", Name:"frontend", UID:"770d2662-5d04-4b33-9ff4-76534a7240a8", APIVersion:"apps/v1", ResourceVersion:"2464", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-p6t59
I0326 05:24:08.746540   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200247-8813", Name:"frontend", UID:"770d2662-5d04-4b33-9ff4-76534a7240a8", APIVersion:"apps/v1", ResourceVersion:"2464", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-5dmgx
I0326 05:24:08.747167   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200247-8813", Name:"frontend", UID:"770d2662-5d04-4b33-9ff4-76534a7240a8", APIVersion:"apps/v1", ResourceVersion:"2464", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-zhr55
apps.sh:547: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
+++ [0326 05:24:08] Deleting rs
replicaset.apps "frontend" deleted
E0326 05:24:08.994634   55871 replica_set.go:535] sync "namespace-1585200247-8813/frontend" failed with replicasets.apps "frontend" not found
apps.sh:551: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:553: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
pod "frontend-5dmgx" deleted
pod "frontend-p6t59" deleted
pod "frontend-zhr55" deleted
apps.sh:556: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 15 lines ...
Namespace:    namespace-1585200247-8813
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1585200247-8813
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
Namespace:    namespace-1585200247-8813
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
Namespace:    namespace-1585200247-8813
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 25 lines ...
Namespace:    namespace-1585200247-8813
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1585200247-8813
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1585200247-8813
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
Namespace:    namespace-1585200247-8813
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 134 lines ...
deployment.apps/scale-1 scaled
I0326 05:24:12.075419   55871 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585200247-8813", Name:"scale-1", UID:"53523b9c-b1f2-4b26-a4a0-2d0f1446879b", APIVersion:"apps/v1", ResourceVersion:"2553", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-1-5ff9767d8c to 3
deployment.apps/scale-2 scaled
I0326 05:24:12.081800   55871 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585200247-8813", Name:"scale-2", UID:"6b60713d-effb-4caf-9086-25300bd245cc", APIVersion:"apps/v1", ResourceVersion:"2555", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-2-5ff9767d8c to 3
deployment.apps/scale-3 scaled
I0326 05:24:12.089347   55871 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585200247-8813", Name:"scale-3", UID:"c3ad47be-4b38-4974-b717-d7874d439a6a", APIVersion:"apps/v1", ResourceVersion:"2559", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set scale-3-5ff9767d8c to 3
E0326 05:24:12.096281   55871 replica_set.go:535] sync "namespace-1585200247-8813/scale-2-5ff9767d8c" failed with Operation cannot be fulfilled on replicasets.apps "scale-2-5ff9767d8c": the object has been modified; please apply your changes to the latest version and try again
I0326 05:24:12.146338   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200247-8813", Name:"scale-1-5ff9767d8c", UID:"c39add58-2ff5-4ef7-b2ed-24e6e26f9a67", APIVersion:"apps/v1", ResourceVersion:"2554", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-1-5ff9767d8c-hr544
apps.sh:606: Successful get deploy scale-1 {{.spec.replicas}}: 3
I0326 05:24:12.196567   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200247-8813", Name:"scale-3-5ff9767d8c", UID:"41564c02-4dcf-4077-b40a-28a9b14abcba", APIVersion:"apps/v1", ResourceVersion:"2563", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-3-5ff9767d8c-bsg48
I0326 05:24:12.247089   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200247-8813", Name:"scale-2-5ff9767d8c", UID:"c15b93ea-c409-4843-846f-9d5fd59866d1", APIVersion:"apps/v1", ResourceVersion:"2558", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-2-5ff9767d8c-pwhmh
apps.sh:607: Successful get deploy scale-2 {{.spec.replicas}}: 3
apps.sh:608: Successful get deploy scale-3 {{.spec.replicas}}: 3
I0326 05:24:12.396924   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200247-8813", Name:"scale-3-5ff9767d8c", UID:"41564c02-4dcf-4077-b40a-28a9b14abcba", APIVersion:"apps/v1", ResourceVersion:"2563", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: scale-3-5ff9767d8c-2wn4s
replicaset.apps "frontend" deleted
deployment.apps "scale-1" deleted
deployment.apps "scale-2" deleted
deployment.apps "scale-3" deleted
replicaset.apps/frontend created
E0326 05:24:12.698957   55871 replica_set.go:535] sync "namespace-1585200247-8813/scale-1-5ff9767d8c" failed with replicasets.apps "scale-1-5ff9767d8c" not found
apps.sh:616: Successful get rs frontend {{.spec.replicas}}: 3
I0326 05:24:12.846534   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200247-8813", Name:"frontend", UID:"a8cfce86-b4b2-4bd5-a583-cf75ca36f591", APIVersion:"apps/v1", ResourceVersion:"2587", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-wcx2d
service/frontend exposed
E0326 05:24:12.944760   55871 replica_set.go:535] sync "namespace-1585200247-8813/scale-3-5ff9767d8c" failed with replicasets.apps "scale-3-5ff9767d8c" not found
apps.sh:620: Successful get service frontend {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0326 05:24:12.996281   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200247-8813", Name:"frontend", UID:"a8cfce86-b4b2-4bd5-a583-cf75ca36f591", APIVersion:"apps/v1", ResourceVersion:"2587", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-vpj4g
I0326 05:24:13.046658   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200247-8813", Name:"frontend", UID:"a8cfce86-b4b2-4bd5-a583-cf75ca36f591", APIVersion:"apps/v1", ResourceVersion:"2587", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-npfvx
service/frontend-2 exposed
apps.sh:624: Successful get service frontend-2 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: default 80
service "frontend" deleted
... skipping 38 lines ...
horizontalpodautoscaler.autoscaling/frontend autoscaled
apps.sh:680: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
horizontalpodautoscaler.autoscaling "frontend" deleted
horizontalpodautoscaler.autoscaling/frontend autoscaled
apps.sh:684: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
horizontalpodautoscaler.autoscaling "frontend" deleted
Error: required flag(s) "max" not set
replicaset.apps "frontend" deleted
+++ exit code: 0
Recording: run_stateful_set_tests
Running command: run_stateful_set_tests

+++ Running case: test-cmd.run_stateful_set_tests 
... skipping 61 lines ...
apps.sh:458: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:459: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
statefulset.apps/nginx rolled back
apps.sh:462: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
apps.sh:463: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
apps.sh:467: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
apps.sh:468: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
statefulset.apps/nginx rolled back
apps.sh:471: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
apps.sh:472: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
... skipping 58 lines ...
Name:         mock
Namespace:    namespace-1585200261-869
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 56 lines ...
Name:         mock
Namespace:    namespace-1585200261-869
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 56 lines ...
Name:         mock
Namespace:    namespace-1585200261-869
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 41 lines ...
Namespace:    namespace-1585200261-869
Selector:     app=mock
Labels:       app=mock
              status=replaced
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 11 lines ...
Namespace:    namespace-1585200261-869
Selector:     app=mock2
Labels:       app=mock2
              status=replaced
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock2
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 16 lines ...
replicationcontroller/mock edited
replicationcontroller/mock2 edited
generic-resources.sh:120: Successful get rc mock {{.metadata.labels.status}}: edited
generic-resources.sh:122: Successful get rc mock2 {{.metadata.labels.status}}: edited
replicationcontroller/mock labeled
replicationcontroller/mock2 labeled
E0326 05:24:29.901466   55871 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:140: Successful get rc mock {{.metadata.labels.labeled}}: true
generic-resources.sh:142: Successful get rc mock2 {{.metadata.labels.labeled}}: true
replicationcontroller/mock annotated
replicationcontroller/mock2 annotated
generic-resources.sh:159: Successful get rc mock {{.metadata.annotations.annotated}}: true
generic-resources.sh:161: Successful get rc mock2 {{.metadata.annotations.annotated}}: true
... skipping 76 lines ...
+++ [0326 05:24:33] Creating namespace namespace-1585200273-5970
namespace/namespace-1585200273-5970 created
Context "test" modified.
+++ [0326 05:24:33] Testing persistent volumes
storage.sh:30: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
persistentvolume/pv0001 created
E0326 05:24:34.085427   55871 pv_protection_controller.go:118] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
persistentvolume "pv0001" deleted
persistentvolume/pv0002 created
storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
persistentvolume "pv0002" deleted
persistentvolume/pv0003 created
E0326 05:24:34.754164   55871 pv_protection_controller.go:118] PV pv0003 failed with : Operation cannot be fulfilled on persistentvolumes "pv0003": the object has been modified; please apply your changes to the latest version and try again
storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
persistentvolume "pv0003" deleted
storage.sh:42: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
persistentvolume/pv0001 created
E0326 05:24:35.168293   55871 pv_protection_controller.go:118] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
storage.sh:45: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
persistentvolume "pv0001" deleted
has:warning: deleting cluster-scoped resources
Successful
... skipping 440 lines ...
Events:              <none>
core.sh:1509: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
node/127.0.0.1 patched
core.sh:1512: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: true
node/127.0.0.1 patched
core.sh:1515: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
E0326 05:24:39.165286   55871 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
tokenreview.authentication.k8s.io/<unknown> created
tokenreview.authentication.k8s.io/<unknown> created
+++ exit code: 0
Recording: run_authorization_tests
Running command: run_authorization_tests

... skipping 79 lines ...
Successful
message:yes
has:yes
Successful
message:yes
has:yes
E0326 05:24:39.842910   55871 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:Warning: the server doesn't have a resource type 'invalid_resource'
yes
has:the server doesn't have a resource type
Successful
message:yes
has:yes
Successful
message:error: --subresource can not be used with NonResourceURL
has:subresource can not be used with NonResourceURL
Successful
Successful
message:yes
0
has:0
... skipping 59 lines ...
		{Verbs:[get list watch] APIGroups:[] Resources:[configmaps] ResourceNames:[] NonResourceURLs:[]}
legacy-script.sh:821: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
legacy-script.sh:822: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
legacy-script.sh:823: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
legacy-script.sh:824: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
Successful
message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
has:only rbac.authorization.k8s.io/v1 is supported
rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
role.rbac.authorization.k8s.io "testing-R" deleted
warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
... skipping 20 lines ...
replicationcontroller/cassandra created
I0326 05:24:42.639393   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585200282-16448", Name:"cassandra", UID:"44894d0c-b190-49f1-923a-710e9ff3678b", APIVersion:"v1", ResourceVersion:"3082", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-px7qq
I0326 05:24:42.644150   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585200282-16448", Name:"cassandra", UID:"44894d0c-b190-49f1-923a-710e9ff3678b", APIVersion:"v1", ResourceVersion:"3082", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-kkf4q
service/cassandra created
Waiting for Get all -l'app=cassandra' {{range.items}}{{range .metadata.labels}}{{.}}:{{end}}{{end}} : expected: cassandra:cassandra:cassandra:cassandra::, got: cassandra:cassandra:cassandra:cassandra:

discovery.sh:91: FAIL!
Get all -l'app=cassandra' {{range.items}}{{range .metadata.labels}}{{.}}:{{end}}{{end}}
  Expected: cassandra:cassandra:cassandra:cassandra::
  Got:      cassandra:cassandra:cassandra:cassandra:
55 /home/prow/go/src/k8s.io/kubernetes/hack/lib/test.sh
discovery.sh:92: Successful get all -l'app=cassandra' {{range.items}}{{range .metadata.labels}}{{.}}:{{end}}{{end}}: cassandra:cassandra:cassandra:cassandra:
pod "cassandra-kkf4q" deleted
I0326 05:24:43.141948   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585200282-16448", Name:"cassandra", UID:"44894d0c-b190-49f1-923a-710e9ff3678b", APIVersion:"v1", ResourceVersion:"3088", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-tbnbm
pod "cassandra-px7qq" deleted
I0326 05:24:43.150295   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1585200282-16448", Name:"cassandra", UID:"44894d0c-b190-49f1-923a-710e9ff3678b", APIVersion:"v1", ResourceVersion:"3088", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-hnrxm
replicationcontroller "cassandra" deleted
E0326 05:24:43.156202   55871 replica_set.go:535] sync "namespace-1585200282-16448/cassandra" failed with replicationcontrollers "cassandra" not found
service "cassandra" deleted
+++ exit code: 0
Recording: run_kubectl_explain_tests
Running command: run_kubectl_explain_tests

+++ Running case: test-cmd.run_kubectl_explain_tests 
... skipping 176 lines ...
Successful
message:sorted-pod1:sorted-pod2:sorted-pod3:
has:sorted-pod1:sorted-pod2:sorted-pod3:
Successful
message:sorted-pod3:sorted-pod2:sorted-pod1:
has:sorted-pod3:sorted-pod2:sorted-pod1:
E0326 05:24:46.198557   55871 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:sorted-pod2:sorted-pod1:sorted-pod3:
has:sorted-pod2:sorted-pod1:sorted-pod3:
Successful
message:sorted-pod1:sorted-pod2:sorted-pod3:
has:sorted-pod1:sorted-pod2:sorted-pod3:
... skipping 763 lines ...
    }
}
certificate.sh:57: Successful get csr/foo {{range.status.conditions}}{{.type}}{{end}}: Denied
certificatesigningrequest.certificates.k8s.io "foo" deleted
certificate.sh:59: Successful get csr {{range.items}}{{.metadata.name}}{{end}}: 
+++ exit code: 0
E0326 05:25:06.580133   55871 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Recording: run_cluster_management_tests
Running command: run_cluster_management_tests

+++ Running case: test-cmd.run_cluster_management_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_cluster_management_tests
... skipping 50 lines ...
message:node/127.0.0.1 already uncordoned (server dry run)
has:already uncordoned
node-management.sh:142: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
node/127.0.0.1 labeled
node-management.sh:147: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
Successful
message:error: cannot specify both a node name and a --selector option
See 'kubectl drain -h' for help and examples
has:cannot specify both a node name
Successful
message:error: USAGE: cordon NODE [flags]
See 'kubectl cordon -h' for help and examples
has:error\: USAGE\: cordon NODE
node/127.0.0.1 already uncordoned
Successful
message:error: You must provide one or more resources by argument or filename.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
   '<resource> <name>'
   '<resource>'
has:must provide one or more resources
... skipping 14 lines ...
+++ [0326 05:25:11] Testing kubectl plugins
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/version/kubectl-version
  - warning: kubectl-version overwrites existing command: "kubectl version"
error: one plugin warning was found
has:kubectl-version overwrites existing command: "kubectl version"
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/kubectl-foo
test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
  - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
error: one plugin warning was found
has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/kubectl-foo
has:plugins are available
Successful
message:Unable read directory "test/fixtures/pkg/kubectl/plugins/empty" from your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory. Skipping...
error: unable to find any kubectl plugins in your PATH
has:unable to find any kubectl plugins in your PATH
Successful
message:I am plugin foo
has:plugin foo
Successful
message:I am plugin bar called with args test/fixtures/pkg/kubectl/plugins/bar/kubectl-bar arg1
... skipping 10 lines ...

+++ Running case: test-cmd.run_impersonation_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_impersonation_tests
+++ [0326 05:25:12] Testing impersonation
Successful
message:error: requesting groups or user-extra for  without impersonating a user
has:without impersonating a user
certificatesigningrequest.certificates.k8s.io/foo created
authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
authorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
certificatesigningrequest.certificates.k8s.io "foo" deleted
certificatesigningrequest.certificates.k8s.io/foo created
... skipping 11 lines ...
+++ [0326 05:25:13] Creating namespace namespace-1585200313-24544
namespace/namespace-1585200313-24544 created
Context "test" modified.
deployment.apps/test-1 created
I0326 05:25:13.446053   55871 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585200313-24544", Name:"test-1", UID:"72b8c9da-4626-4fb8-b077-fe692e3efcfa", APIVersion:"apps/v1", ResourceVersion:"3275", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-1-76c6c9b9f7 to 1
I0326 05:25:13.453570   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200313-24544", Name:"test-1-76c6c9b9f7", UID:"4f166466-dd61-4dc4-9d9f-ffa89f596926", APIVersion:"apps/v1", ResourceVersion:"3276", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-1-76c6c9b9f7-4bxk9
E0326 05:25:13.494603   55871 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/test-2 created
I0326 05:25:13.527869   55871 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1585200313-24544", Name:"test-2", UID:"a459a439-cb8c-477f-b794-ce844c3beaea", APIVersion:"apps/v1", ResourceVersion:"3285", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-2-66cd8f4d58 to 1
I0326 05:25:13.531502   55871 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1585200313-24544", Name:"test-2-66cd8f4d58", UID:"22ccb454-8fb0-4ac3-8055-afc1ac1e425a", APIVersion:"apps/v1", ResourceVersion:"3286", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-2-66cd8f4d58-vv55l
wait.sh:36: Successful get deployments {{range .items}}{{.metadata.name}},{{end}}: test-1,test-2,
deployment.apps "test-1" deleted
deployment.apps "test-2" deleted
... skipping 30 lines ...
I0326 05:25:15.976732   52408 establishing_controller.go:87] Shutting down EstablishingController
I0326 05:25:15.976748   52408 naming_controller.go:302] Shutting down NamingConditionController
I0326 05:25:15.976763   52408 customresource_discovery_controller.go:245] Shutting down DiscoveryController
I0326 05:25:15.976776   52408 crd_finalizer.go:278] Shutting down CRDFinalizer
I0326 05:25:15.977071   52408 secure_serving.go:222] Stopped listening on 127.0.0.1:6443
I0326 05:25:15.977384   52408 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
W0326 05:25:15.977509   52408 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0326 05:25:15.977536   52408 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
I0326 05:25:15.977512   52408 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
I0326 05:25:15.977631   52408 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
I0326 05:25:15.977773   52408 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
W0326 05:25:15.977803   52408 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0326 05:25:15.977831   52408 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
I0326 05:25:15.977838   52408 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
I0326 05:25:15.977846   52408 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
W0326 05:25:15.977901   52408 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0326 05:25:15.977944   52408 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
W0326 05:25:15.977977   52408 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0326 05:25:15.977985   52408 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
I0326 05:25:15.978083   52408 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
I0326 05:25:15.978096   52408 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
W0326 05:25:15.978157   52408 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0326 05:25:15.978176   52408 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
W0326 05:25:15.978538   52408 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
... skipping 107 lines ...
W0326 05:25:15.981611   52408 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
junit report dir: /logs/artifacts
+++ [0326 05:25:16] Clean up complete
+ make test-integration
W0326 05:25:16.977947   52408 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
... skipping 66 lines ...
W0326 05:25:16.982319   52408 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0326 05:25:18.276911   52408 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
... skipping 66 lines ...
W0326 05:25:18.878943   52408 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
warning: ignoring symlink /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes
go: warning: "k8s.io/kubernetes/vendor/github.com/go-bindata/go-bindata/..." matched no packages
W0326 05:25:20.389143   52408 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
... skipping 4 lines ...
W0326 05:25:20.520466   52408 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
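The burst of clientconn warnings above means nothing is accepting connections on 127.0.0.1:2379 at this point: the previous etcd instance has been torn down between test phases, and the apiserver's gRPC client keeps retrying until a new one comes up. A quick manual check for whether the client port is open might look like this (an illustrative one-liner, not part of the test scripts):

    # Zero-I/O probe of the etcd client port; exits 0 if something is listening.
    nc -z 127.0.0.1 2379 && echo "etcd port open" || echo "connection refused"
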
+++ [0326 05:25:20] Checking etcd is on PATH
/home/prow/go/src/k8s.io/kubernetes/third_party/etcd/etcd
+++ [0326 05:25:20] Starting etcd instance
etcd --advertise-client-urls http://127.0.0.1:2379 --data-dir /tmp/tmp.sXJ7hwHo7t --listen-client-urls http://127.0.0.1:2379 --debug > "/logs/artifacts/etcd.011e44ee-6f20-11ea-9d70-1280e1450816.root.log.DEBUG.20200326-052520.94074" 2>/dev/null
Waiting for etcd to come up.
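By hand, the same wait can be approximated by polling etcd's /health endpoint on the client URL from the command line above (a sketch; the 30-attempt bound is arbitrary):

    # Poll etcd's health endpoint until it answers, up to ~30s.
    for i in $(seq 1 30); do
      curl -fsS http://127.0.0.1:2379/health >/dev/null 2>&1 && break
      sleep 1
    done
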
E0326 05:25:21.621651   52408 controller.go:184] StorageError: key not found, Code: 1, Key: /registry/masterleases/10.60.19.185, ResourceVersion: 0, AdditionalErrorMsg: 
... skipping 250 lines ...
{"Time":"2020-03-26T05:32:08.301501402Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/secrets","Output":"ok  \tk8s.io/kubernetes/test/integration/secrets\t3.945s\n"}
{"Time":"2020-03-26T05:32:09.467388603Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/replicaset","Test":"TestFullyLabeledReplicas","Output":"ap[] OwnerReferences:[{APIVersion:apps/v1 Kind:ReplicaSet Name:rs UID:ab08b900-38cc-4f42-a7a3-9f3ab980b189 Controller:0xc01b12f6ba BlockOwnerDeletion:0xc01b12f6bb}] Finalizers:[] ClusterName: ManagedFields:[]}.\n"}
{"Time":"2020-03-26T05:32:10.887468893Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/garbagecollector","Test":"TestCRDDeletionCascading","Output":"dangling: []v1.OwnerReference{v1.OwnerReference{APIVersion:\"mygroup.example.com/v1beta1\", Kind:\"foo757r9a\", Name:\"ownerscvdt\", UID:\"dd90922c-5775-4354-b850-9d027c2a5835\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}\n"}
{"Time":"2020-03-26T05:32:13.564081917Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/replicaset","Output":"ok  \tk8s.io/kubernetes/test/integration/replicaset\t62.162s\n"}
{"Time":"2020-03-26T05:32:21.551877418Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/replicationcontroller","Output":"ok  \tk8s.io/kubernetes/test/integration/replicationcontroller\t60.464s\n"}
{"Time":"2020-03-26T05:32:23.997222114Z","Action":"output","Package":"k8s.io/kubernetes/test/integration/garbagecollector","Test":"TestCRDDeletionCascading","Output":"dangling: []v1.OwnerReference{v1.OwnerReference{APIVersion:\"mygroup.example.com/v1beta1\", Kind:\"foo757r9a\", Name:\"owner4ffng\", UID:\"b26ecfb9-e6dc-4784-b067-1f0249c3f78b\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}\n"}
{"Time":"2020-03-26T05:32:27.065691254Z","Action":"output","Package":"k8s.io/ku{"component":"entrypoint","file":"prow/entrypoint/run.go:168","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Entrypoint received interrupt: terminated","time":"2020-03-26T05:33:15Z"}