PR: rosti: kubeadm: Remove unused constants
Result: FAILURE
Tests: 1 failed / 2599 succeeded
Started: 2020-05-23 02:21
Elapsed: 29m38s
Revision: def0db6a16b4bce8b53e5cd847309de75d9cf9bb
Refs: 91364
resultstore: https://source.cloud.google.com/results/invocations/68c24fc0-f115-43a6-8537-ca0d3a3e0422/targets/test

Test Failures


k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration TestSubresourcePatch 1.95s

go test -v k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration -run TestSubresourcePatch$
=== RUN   TestSubresourcePatch
I0523 02:48:47.054525  122220 nonstructuralschema_controller.go:198] Shutting down NonStructuralSchemaConditionController
I0523 02:48:47.054539  122220 customresource_discovery_controller.go:245] Shutting down DiscoveryController
I0523 02:48:47.054551  122220 establishing_controller.go:87] Shutting down EstablishingController
I0523 02:48:47.054569  122220 dynamic_serving_content.go:145] Shutting down serving-cert::/tmp/apiextensions-apiserver543885913/apiserver.crt::/tmp/apiextensions-apiserver543885913/apiserver.key
I0523 02:48:47.054718  122220 secure_serving.go:231] Stopped listening on 127.0.0.1:34507
I0523 02:48:47.054741  122220 tlsconfig.go:255] Shutting down DynamicServingCertificateController
I0523 02:48:47.055266  122220 serving.go:325] Generated self-signed cert (/tmp/apiextensions-apiserver448511987/apiserver.crt, /tmp/apiextensions-apiserver448511987/apiserver.key)
I0523 02:48:48.055753  122220 client.go:360] parsed scheme: "endpoint"
I0523 02:48:48.055844  122220 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
W0523 02:48:48.218406  122220 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0523 02:48:48.220160  122220 client.go:360] parsed scheme: "endpoint"
I0523 02:48:48.220203  122220 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0523 02:48:48.221126  122220 client.go:360] parsed scheme: "endpoint"
I0523 02:48:48.221224  122220 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
W0523 02:48:48.225157  122220 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0523 02:48:48.229302  122220 secure_serving.go:187] Serving securely on 127.0.0.1:35115
I0523 02:48:48.229372  122220 customresource_discovery_controller.go:209] Starting DiscoveryController
I0523 02:48:48.229409  122220 dynamic_serving_content.go:130] Starting serving-cert::/tmp/apiextensions-apiserver448511987/apiserver.crt::/tmp/apiextensions-apiserver448511987/apiserver.key
I0523 02:48:48.229447  122220 tlsconfig.go:240] Starting DynamicServingCertificateController
E0523 02:48:48.230156  122220 reflector.go:127] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: failed to list *v1.Service: Get http://127.1.2.3:12345/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.1.2.3:12345: connect: connection refused
I0523 02:48:48.230203  122220 naming_controller.go:291] Starting NamingConditionController
I0523 02:48:48.230226  122220 establishing_controller.go:76] Starting EstablishingController
I0523 02:48:48.230243  122220 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I0523 02:48:48.230264  122220 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0523 02:48:48.230294  122220 crd_finalizer.go:266] Starting CRDFinalizer
I0523 02:48:48.938666  122220 client.go:360] parsed scheme: "endpoint"
I0523 02:48:48.938716  122220 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0523 02:48:48.940355  122220 client.go:360] parsed scheme: "endpoint"
I0523 02:48:48.940396  122220 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
W0523 02:48:49.006881  122220 cacher.go:151] Terminating all watchers from cacher *apiextensions.CustomResourceDefinition
--- FAIL: TestSubresourcePatch (1.95s)
    testserver.go:249: Resolved testserver package path to: "/home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/cmd/server/testing"
    testserver.go:142: runtime-config=map[api/all:true]
    testserver.go:143: Starting apiextensions-apiserver on port 35115...
    testserver.go:161: Waiting for /healthz to be ok...
    subresources_test.go:755: Creating foo
    subresources_test.go:766: Patching .status.num to 999
    subresources_test.go:795: Patching .status.num again to 999
    subresources_test.go:806: Applying empty patch
    subresources_test.go:817: Patching .spec.replicas to 7
    subresources_test.go:858: Patching .spec.replicas again to 7
    subresources_test.go:869: Applying empty patch
    subresources_test.go:755: Creating foo
    subresources_test.go:766: Patching .status.num to 999
    subresources_test.go:795: Patching .status.num again to 999
    basic_test.go:1046: wanted "61656" at .metadata.resourceVersion, got "61658"

				from junit_20200523-023730.xml
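
The assertion that failed (basic_test.go:1046) compares resourceVersion strings for exact equality after what should be a no-op write; any concurrent write that bumps the object's version between the two reads makes the check flaky. A minimal sketch of the pattern being exercised, using the client-go dynamic client (the GVR, namespace, and object name below are illustrative placeholders, not the test's actual fixtures):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Placeholder GVR for whatever custom resource is under test.
	gvr := schema.GroupVersionResource{Group: "mygroup.example.com", Version: "v1beta1", Resource: "foos"}
	before, err := client.Resource(gvr).Namespace("default").Get(context.TODO(), "foo", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// An empty merge patch is a semantic no-op, so the server should not
	// bump resourceVersion...
	after, err := client.Resource(gvr).Namespace("default").Patch(
		context.TODO(), "foo", types.MergePatchType, []byte(`{}`), metav1.PatchOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// ...but a strict equality check is only safe if nothing else writes
	// the object in between, which is what appears to have flaked above.
	fmt.Printf("resourceVersion before=%q after=%q\n",
		before.GetResourceVersion(), after.GetResourceVersion())
}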



2599 Passed Tests

6 Skipped Tests

Error lines from build-log.txt

... skipping 85 lines ...
Recording: record_command_canary
Running command: record_command_canary

+++ Running case: test-cmd.record_command_canary 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: record_command_canary
/home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 154: bogus-expected-to-fail: command not found
!!! [0523 02:25:40] Call tree:
!!! [0523 02:25:40]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 record_command_canary(...)
!!! [0523 02:25:41]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0523 02:25:41]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:130 juLog(...)
!!! [0523 02:25:41]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:158 record_command(...)
!!! [0523 02:25:41]  5: hack/make-rules/test-cmd.sh:35 source(...)
+++ exit code: 1
+++ error: 1
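
record_command_canary appears to be a deliberate self-test of the harness: it runs the nonexistent command bogus-expected-to-fail so that the failure-recording path (sh2ju.sh, per the call tree above) is exercised before any real test runs, which is why this block ends with exit code 1 while the job carries on. A rough sketch of the same idea in Go, assuming nothing about the actual shell implementation:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Deliberately invoke a command that cannot exist, then confirm the
	// failure is observed rather than silently swallowed by the harness.
	err := exec.Command("bogus-expected-to-fail").Run()
	if err != nil {
		fmt.Println("canary failed as expected:", err)
		return
	}
	fmt.Println("canary unexpectedly succeeded; failure reporting may be broken")
}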
+++ [0523 02:25:41] Running kubeadm tests
+++ [0523 02:25:46] Building go targets for linux/amd64:
    cmd/kubeadm
+++ [0523 02:26:33] Running tests without code coverage
{"Time":"2020-05-23T02:28:08.887132095Z","Action":"output","Package":"k8s.io/kubernetes/cmd/kubeadm/test/cmd","Output":"ok  \tk8s.io/kubernetes/cmd/kubeadm/test/cmd\t54.074s\n"}
✓  cmd/kubeadm/test/cmd (54.077s)
... skipping 311 lines ...
I0523 02:29:59.108298   54144 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
+++ [0523 02:30:04] Building go targets for linux/amd64:
    cmd/kube-controller-manager
+++ [0523 02:30:36] Starting controller-manager
Flag --port has been deprecated, see --secure-port instead.
I0523 02:30:36.905998   57740 serving.go:331] Generated self-signed cert in-memory
W0523 02:30:37.207444   57740 authentication.go:368] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0523 02:30:37.207482   57740 authentication.go:265] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0523 02:30:37.207491   57740 authentication.go:289] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0523 02:30:37.207504   57740 authorization.go:177] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0523 02:30:37.207532   57740 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0523 02:30:37.207564   57740 controllermanager.go:160] Version: v1.19.0-beta.0.137+6e30624d632e82
I0523 02:30:37.208722   57740 secure_serving.go:187] Serving securely on [::]:10257
I0523 02:30:37.208832   57740 tlsconfig.go:240] Starting DynamicServingCertificateController
I0523 02:30:37.209382   57740 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
I0523 02:30:37.209442   57740 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...
... skipping 64 lines ...
I0523 02:30:37.484051   57740 stateful_set.go:146] Starting stateful set controller
I0523 02:30:37.484063   57740 shared_informer.go:240] Waiting for caches to sync for stateful set
I0523 02:30:37.485985   57740 controllermanager.go:532] Started "persistentvolume-expander"
W0523 02:30:37.486013   57740 controllermanager.go:511] "bootstrapsigner" is disabled
I0523 02:30:37.486788   57740 expand_controller.go:319] Starting expand controller
I0523 02:30:37.486805   57740 shared_informer.go:240] Waiting for caches to sync for expand
E0523 02:30:37.489675   57740 core.go:90] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0523 02:30:37.489880   57740 controllermanager.go:524] Skipping "service"
I0523 02:30:37.490291   57740 node_lifecycle_controller.go:77] Sending events to api server
E0523 02:30:37.490430   57740 core.go:230] failed to start cloud node lifecycle controller: no cloud provider provided
W0523 02:30:37.490491   57740 controllermanager.go:524] Skipping "cloud-node-lifecycle"
W0523 02:30:37.490834   57740 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0523 02:30:37.490869   57740 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0523 02:30:37.490888   57740 controllermanager.go:532] Started "serviceaccount"
I0523 02:30:37.491008   57740 serviceaccounts_controller.go:117] Starting service account controller
I0523 02:30:37.491024   57740 shared_informer.go:240] Waiting for caches to sync for service account
... skipping 123 lines ...
  "gitTreeState": "clean",
  "buildDate": "2020-05-23T00:23:11Z",
  "goVersion": "go1.13.9",
  "compiler": "gc",
  "platform": "linux/amd64"
}
I0523 02:30:38.458733   57740 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
E0523 02:30:38.470770   57740 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
E0523 02:30:38.476911   57740 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
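
The two Conflict errors above are the apiserver's optimistic-concurrency check firing: the controller wrote the clusterrole with a stale resourceVersion and is expected to re-read and retry. client-go ships a standard helper for that loop; a minimal sketch with an illustrative mutation, not the aggregation controller's actual code:

package conflictretry

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// relabelAdmin retries on write conflicts the way controllers are expected
// to: re-read the latest object, reapply the change, and try again.
func relabelAdmin(ctx context.Context, client kubernetes.Interface) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		cr, err := client.RbacV1().ClusterRoles().Get(ctx, "admin", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if cr.Labels == nil {
			cr.Labels = map[string]string{}
		}
		cr.Labels["example"] = "retry" // an illustrative mutation
		_, err = client.RbacV1().ClusterRoles().Update(ctx, cr, metav1.UpdateOptions{})
		return err // a Conflict here triggers a fresh Get and another attempt
	})
}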
I0523 02:30:38.584268   57740 shared_informer.go:247] Caches are synced for stateful set 
+++ [0523 02:30:38] Testing kubectl version: check client only output matches expected output
I0523 02:30:38.604277   57740 shared_informer.go:247] Caches are synced for PVC protection 
I0523 02:30:38.657700   57740 shared_informer.go:247] Caches are synced for certificate-csrapproving 
W0523 02:30:38.718902   57740 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
Successful: the flag '--client' shows correct client info
Successful: the flag '--client' correctly has no server version info
+++ [0523 02:30:38] Testing kubectl version: verify json output
I0523 02:30:38.777325   57740 shared_informer.go:247] Caches are synced for daemon sets 
I0523 02:30:38.779761   57740 shared_informer.go:247] Caches are synced for GC 
I0523 02:30:38.780563   57740 shared_informer.go:247] Caches are synced for TTL 
... skipping 97 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [0523 02:30:43] Creating namespace namespace-1590201043-25162
namespace/namespace-1590201043-25162 created
Context "test" modified.
+++ [0523 02:30:44] Testing RESTMapper
+++ [0523 02:30:44] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
bindings                                                                      true         Binding
componentstatuses                 cs                                          false        ComponentStatus
configmaps                        cm                                          true         ConfigMap
endpoints                         ep                                          true         Endpoints
... skipping 58 lines ...
namespace/namespace-1590201049-19980 created
Context "test" modified.
+++ [0523 02:30:49] Testing clusterroles
rbac.sh:29: Successful get clusterroles/cluster-admin {{.metadata.name}}: cluster-admin
rbac.sh:30: Successful get clusterrolebindings/cluster-admin {{.metadata.name}}: cluster-admin
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created (dry run)
clusterrole.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created
rbac.sh:42: Successful get clusterrole/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "pod-admin" deleted
... skipping 18 lines ...
clusterrole.rbac.authorization.k8s.io/url-reader created
rbac.sh:61: Successful get clusterrole/url-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:
rbac.sh:62: Successful get clusterrole/url-reader {{range.rules}}{{range.nonResourceURLs}}{{.}}:{{end}}{{end}}: /logs/*:/healthz/*:
clusterrole.rbac.authorization.k8s.io/aggregation-reader created
rbac.sh:64: Successful get clusterrole/aggregation-reader {{.metadata.name}}: aggregation-reader
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created
rbac.sh:77: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (server dry run)
rbac.sh:80: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
... skipping 58 lines ...
rbac.sh:102: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:foo:test-all-user:
rbac.sh:103: Successful get clusterrolebinding/super-group {{range.subjects}}{{.name}}:{{end}}: the-group:foo:test-all-user:
rbac.sh:104: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.name}}:{{end}}: sa-name:foo:test-all-user:
rolebinding.rbac.authorization.k8s.io/admin created (dry run)
rolebinding.rbac.authorization.k8s.io/admin created (server dry run)
Successful
message:Error from server (NotFound): rolebindings.rbac.authorization.k8s.io "admin" not found
has: not found
rolebinding.rbac.authorization.k8s.io/admin created
rbac.sh:113: Successful get rolebinding/admin {{.roleRef.kind}}: ClusterRole
rbac.sh:114: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:
rolebinding.rbac.authorization.k8s.io/admin subjects updated
rbac.sh:116: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:foo:
... skipping 25 lines ...
namespace/namespace-1590201058-31981 created
Context "test" modified.
+++ [0523 02:30:58] Testing role
role.rbac.authorization.k8s.io/pod-admin created (dry run)
role.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): roles.rbac.authorization.k8s.io "pod-admin" not found
has: not found
role.rbac.authorization.k8s.io/pod-admin created
rbac.sh:155: Successful get role/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
rbac.sh:156: Successful get role/pod-admin {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
rbac.sh:157: Successful get role/pod-admin {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
Successful
... skipping 462 lines ...
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
has:valid-pod
core.sh:192: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: resource(s) were provided, but no name, label selector, or --all flag specified
core.sh:196: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: setting 'all' parameter but found a non empty selector. 
core.sh:204: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:208: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:212: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:217: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
... skipping 19 lines ...
poddisruptionbudget.policy/test-pdb-2 created
core.sh:261: Successful get pdb/test-pdb-2 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 50%
poddisruptionbudget.policy/test-pdb-3 created
core.sh:267: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
poddisruptionbudget.policy/test-pdb-4 created
core.sh:271: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
error: min-available and max-unavailable cannot be both specified
core.sh:277: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
pod/env-test-pod created
matched TEST_CMD_1
matched <set to the key 'key-1' in secret 'test-secret'>
matched TEST_CMD_2
matched <set to the key 'key-2' of config map 'test-configmap'>
... skipping 224 lines ...
core.sh:536: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.2:
Successful
message:kubectl-create kubectl-patch
has:kubectl-patch
pod/valid-pod patched
core.sh:556: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
+++ [0523 02:31:32] "kubectl patch with resourceVersion 557" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
pod "valid-pod" deleted
pod/valid-pod replaced
core.sh:580: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
Successful
message:kubectl-create kubectl-patch kubectl-replace
has:kubectl-replace
Successful
message:error: --grace-period must have --force specified
has:\-\-grace-period must have \-\-force specified
Successful
message:error: --timeout must have --force specified
has:\-\-timeout must have \-\-force specified
node/node-v1-test created
W0523 02:31:33.654823   57740 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
core.sh:608: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
I0523 02:31:33.785066   57740 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"node-v1-test", UID:"bb6cdad2-8c16-4c91-af2e-2083c327e374", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node node-v1-test event: Registered Node node-v1-test in Controller
node/node-v1-test replaced (server dry run)
node/node-v1-test replaced (dry run)
core.sh:633: Successful get node node-v1-test {{range.items}}{{if .metadata.annotations.a}}found{{end}}{{end}}:: :
node/node-v1-test replaced
... skipping 30 lines ...
spec:
  containers:
  - image: k8s.gcr.io/pause:2.0
    name: kubernetes-pause
has:localonlyvalue
core.sh:685: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
error: 'name' already has a value (valid-pod), and --overwrite is false
core.sh:689: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
core.sh:693: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
pod/valid-pod labeled
core.sh:697: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
core.sh:701: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 84 lines ...
+++ Running case: test-cmd.run_kubectl_create_error_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_error_tests
+++ [0523 02:31:46] Creating namespace namespace-1590201106-9335
namespace/namespace-1590201106-9335 created
Context "test" modified.
+++ [0523 02:31:46] Testing kubectl create with error
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
... skipping 42 lines ...

Usage:
  kubectl create -f FILENAME [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
+++ [0523 02:31:46] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
+++ exit code: 0
Recording: run_kubectl_apply_tests
Running command: run_kubectl_apply_tests

... skipping 31 lines ...
I0523 02:31:50.604162   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201107-25274", Name:"test-deployment-retainkeys-7d6d699f45", UID:"8e200de9-3261-40b7-ab47-2c8a4609c07e", APIVersion:"apps/v1", ResourceVersion:"589", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-deployment-retainkeys-7d6d699f45-lxcxh
deployment.apps "test-deployment-retainkeys" deleted
apply.sh:88: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/selector-test-pod created
apply.sh:92: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
apply.sh:101: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Flag --server-dry-run has been deprecated, --server-dry-run is deprecated and can be replaced with --dry-run=server.
pod/test-pod created (server dry run)
W0523 02:31:52.259545   68031 helpers.go:552] --dry-run=true is deprecated (boolean value) and can be replaced with --dry-run=client.
... skipping 7 lines ...
(Bpod "test-pod" deleted
customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
I0523 02:31:54.645556   54144 client.go:360] parsed scheme: "endpoint"
I0523 02:31:54.645599   54144 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0523 02:31:54.796460   54144 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
kind.mygroup.example.com/myobj created (server dry run)
Error from server (NotFound): resources.mygroup.example.com "myobj" not found
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
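
This sequence shows the server-side dry-run contract: the create is fully validated and admitted, but nothing is persisted, so the follow-up get correctly reports NotFound. The same option from Go, sketched against a placeholder custom resource (the GVR and names are illustrative):

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	gvr := schema.GroupVersionResource{Group: "mygroup.example.com", Version: "v1alpha1", Resource: "resources"}
	obj := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "mygroup.example.com/v1alpha1",
		"kind":       "Kind",
		"metadata":   map[string]interface{}{"name": "myobj"},
	}}
	// DryRun=All: the server validates and admits the object but persists nothing.
	_, err = client.Resource(gvr).Namespace("default").Create(
		context.TODO(), obj, metav1.CreateOptions{DryRun: []string{metav1.DryRunAll}})
	fmt.Println("dry-run create:", err) // nil on success; the object still does not exist
}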
namespace/nsb created
apply.sh:154: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/a created
apply.sh:157: Successful get pods a -n nsb {{.metadata.name}}: a
pod/b created
pod/a pruned
apply.sh:161: Successful get pods b -n nsb {{.metadata.name}}: b
Successful
message:Error from server (NotFound): pods "a" not found
has:pods "a" not found
pod "b" deleted
apply.sh:171: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/a created
apply.sh:176: Successful get pods a {{.metadata.name}}: a
Successful
message:Error from server (NotFound): pods "b" not found
has:pods "b" not found
pod/b created
apply.sh:184: Successful get pods a {{.metadata.name}}: a
apply.sh:185: Successful get pods b -n nsb {{.metadata.name}}: b
(Bpod "a" deleted
pod "b" deleted
Successful
message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
has:all resources selected for prune without explicitly passing --all
pod/a created
pod/b created
service/prune-svc created
I0523 02:32:00.763283   57740 horizontal.go:354] Horizontal Pod Autoscaler frontend has been deleted in namespace-1590201103-3895
I0523 02:32:03.132853   54144 client.go:360] parsed scheme: "passthrough"
... skipping 37 lines ...
apply.sh:235: Successful get pods a -n nsb {{.metadata.name}}: a
pod/b created
apply.sh:238: Successful get pods b -n nsb {{.metadata.name}}: b
pod/b unchanged
pod/a pruned
Successful
message:Error from server (NotFound): pods "a" not found
has:pods "a" not found
apply.sh:245: Successful get pods b -n nsb {{.metadata.name}}: b
(Bnamespace "nsb" deleted
Successful
message:error: the namespace from the provided object "nsb" does not match the namespace "foo". You must pass '--namespace=nsb' to perform this operation.
has:the namespace from the provided object "nsb" does not match the namespace "foo".
apply.sh:256: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: 
service/a created
apply.sh:260: Successful get services a {{.metadata.name}}: a
Successful
message:The Service "a" is invalid: spec.clusterIP: Invalid value: "10.0.0.12": field is immutable
has:field is immutable
I0523 02:32:25.720993   57740 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"namespace-1590201107-25274", Name:"a", UID:"12d5613f-8bdb-43dd-9b67-ca9592a65a86", APIVersion:"v1", ResourceVersion:"729", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpoint' Failed to update endpoint namespace-1590201107-25274/a: Operation cannot be fulfilled on endpoints "a": StorageError: invalid object, Code: 4, Key: /registry/services/endpoints/namespace-1590201107-25274/a, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 12d5613f-8bdb-43dd-9b67-ca9592a65a86, UID in object meta: 
service/a configured
apply.sh:267: Successful get services a {{.spec.clusterIP}}: 10.0.0.12
(Bservice "a" deleted
configmap/test-the-map created
service/test-the-service created
deployment.apps/test-the-deployment created
... skipping 18 lines ...
apply.sh:282: Successful get deployment test-the-deployment {{.metadata.name}}: test-the-deployment
apply.sh:283: Successful get service test-the-service {{.metadata.name}}: test-the-service
(Bconfigmap "test-the-map" deleted
service "test-the-service" deleted
deployment.apps "test-the-deployment" deleted
Successful
message:Error from server (NotFound): namespaces "multi-resource-ns" not found
has:namespaces "multi-resource-ns" not found
apply.sh:291: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:namespace/multi-resource-ns created
Error from server (NotFound): error when creating "hack/testdata/multi-resource-1.yaml": namespaces "multi-resource-ns" not found
has:namespaces "multi-resource-ns" not found
Successful
message:Error from server (NotFound): pods "test-pod" not found
has:pods "test-pod" not found
pod/test-pod created
namespace/multi-resource-ns unchanged
apply.sh:299: Successful get pods test-pod -n multi-resource-ns {{.metadata.name}}: test-pod
(Bpod "test-pod" deleted
namespace "multi-resource-ns" deleted
I0523 02:32:29.464740   57740 namespace_controller.go:185] Namespace has been deleted nsb
apply.sh:305: Successful get configmaps {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:configmap/foo created
error: unable to recognize "hack/testdata/multi-resource-2.yaml": no matches for kind "Bogus" in version "example.com/v1"
has:no matches for kind "Bogus" in version "example.com/v1"
apply.sh:311: Successful get configmaps foo {{.metadata.name}}: foo
(Bconfigmap "foo" deleted
apply.sh:317: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:pod/pod-a created
... skipping 5 lines ...
(Bpod "pod-a" deleted
pod "pod-c" deleted
apply.sh:325: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
apply.sh:329: Successful get crds {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:customresourcedefinition.apiextensions.k8s.io/widgets.example.com created
error: unable to recognize "hack/testdata/multi-resource-4.yaml": no matches for kind "Widget" in version "example.com/v1"
has:no matches for kind "Widget" in version "example.com/v1"
I0523 02:32:36.811999   54144 client.go:360] parsed scheme: "endpoint"
I0523 02:32:36.812051   54144 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
Successful
message:Error from server (NotFound): widgets.example.com "foo" not found
has:widgets.example.com "foo" not found
apply.sh:335: Successful get crds widgets.example.com {{.metadata.name}}: widgets.example.com
I0523 02:32:37.156978   54144 controller.go:606] quota admission added evaluator for: widgets.example.com
widget.example.com/foo created
customresourcedefinition.apiextensions.k8s.io/widgets.example.com unchanged
apply.sh:338: Successful get widget foo {{.metadata.name}}: foo
... skipping 32 lines ...
customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
I0523 02:32:41.016411   54144 client.go:360] parsed scheme: "endpoint"
I0523 02:32:41.016581   54144 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
kind.mygroup.example.com/myobj serverside-applied (server dry run)
W0523 02:32:41.117993   57740 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0523 02:32:41.118184   57740 shared_informer.go:240] Waiting for caches to sync for garbage collector
Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0523 02:32:41.218479   57740 shared_informer.go:247] Caches are synced for garbage collector 
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
+++ exit code: 0
Recording: run_kubectl_run_tests
Running command: run_kubectl_run_tests

... skipping 10 lines ...
pod/nginx-extensions created (dry run)
pod/nginx-extensions created (server dry run)
run.sh:32: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
run.sh:35: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/nginx-extensions created
W0523 02:32:42.242299   54144 cacher.go:151] Terminating all watchers from cacher *unstructured.Unstructured
E0523 02:32:42.243261   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
run.sh:39: Successful get pod {{range.items}}{{.metadata.name}}:{{end}}: nginx-extensions:
(Bpod "nginx-extensions" deleted
Successful
message:pod/test1 created
has:pod/test1 created
pod "test1" deleted
Successful
message:error: Invalid image name "InvalidImageName": invalid reference format
has:error: Invalid image name "InvalidImageName": invalid reference format
+++ exit code: 0
Recording: run_kubectl_create_filter_tests
Running command: run_kubectl_create_filter_tests

+++ Running case: test-cmd.run_kubectl_create_filter_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_filter_tests
+++ [0523 02:32:42] Creating namespace namespace-1590201162-25603
namespace/namespace-1590201162-25603 created
Context "test" modified.
+++ [0523 02:32:43] Testing kubectl create filter
create.sh:50: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
E0523 02:32:43.280006   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
pod/selector-test-pod created
create.sh:54: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_apply_deployments_tests
Running command: run_kubectl_apply_deployments_tests

... skipping 18 lines ...
apps.sh:134: Successful get deployments my-depl {{.spec.template.metadata.labels.l1}}: l1
apps.sh:135: Successful get deployments my-depl {{.spec.selector.matchLabels.l1}}: l1
apps.sh:136: Successful get deployments my-depl {{.metadata.labels.l1}}: <no value>
deployment.apps "my-depl" deleted
replicaset.apps "my-depl-76fb9d7d7d" deleted
pod "my-depl-76fb9d7d7d-wqx24" deleted
E0523 02:32:45.679242   57740 replica_set.go:535] sync "namespace-1590201163-5588/my-depl-76fb9d7d7d" failed with replicasets.apps "my-depl-76fb9d7d7d" not found
apps.sh:142: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:143: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:144: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
E0523 02:32:46.135220   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:148: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx created
I0523 02:32:46.363249   57740 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1590201163-5588", Name:"nginx", UID:"12695cb9-be3b-437e-9ef9-a75d9743ab76", APIVersion:"apps/v1", ResourceVersion:"924", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-9587c59df to 3
I0523 02:32:46.367885   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201163-5588", Name:"nginx-9587c59df", UID:"6d9f00e4-52e9-40e0-ae15-a3a5426079d1", APIVersion:"apps/v1", ResourceVersion:"925", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9587c59df-k8xjk
I0523 02:32:46.370933   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201163-5588", Name:"nginx-9587c59df", UID:"6d9f00e4-52e9-40e0-ae15-a3a5426079d1", APIVersion:"apps/v1", ResourceVersion:"925", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9587c59df-dwf4b
I0523 02:32:46.373484   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201163-5588", Name:"nginx-9587c59df", UID:"6d9f00e4-52e9-40e0-ae15-a3a5426079d1", APIVersion:"apps/v1", ResourceVersion:"925", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9587c59df-657xw
apps.sh:152: Successful get deployment nginx {{.metadata.name}}: nginx
E0523 02:32:49.713048   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1590201163-5588\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx", Namespace: "namespace-1590201163-5588"
for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
has:Error from server (Conflict)
deployment.apps/nginx configured
I0523 02:32:56.062355   57740 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1590201163-5588", Name:"nginx", UID:"9b491875-f9fe-4ced-b0e0-371c95406efb", APIVersion:"apps/v1", ResourceVersion:"968", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-6c499547c4 to 3
I0523 02:32:56.068211   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201163-5588", Name:"nginx-6c499547c4", UID:"fd1b1479-bd81-4b56-b942-1b338d0d0344", APIVersion:"apps/v1", ResourceVersion:"969", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6c499547c4-rlvbc
I0523 02:32:56.071392   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201163-5588", Name:"nginx-6c499547c4", UID:"fd1b1479-bd81-4b56-b942-1b338d0d0344", APIVersion:"apps/v1", ResourceVersion:"969", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6c499547c4-898dd
I0523 02:32:56.072311   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201163-5588", Name:"nginx-6c499547c4", UID:"fd1b1479-bd81-4b56-b942-1b338d0d0344", APIVersion:"apps/v1", ResourceVersion:"969", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6c499547c4-gm8cn
Successful
message:        "name": "nginx2"
          "name": "nginx2"
has:"name": "nginx2"
E0523 02:32:59.850320   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0523 02:33:01.449247   57740 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1590201163-5588", Name:"nginx", UID:"f7d1c743-f22f-42f1-bdf1-d23c5e7498fd", APIVersion:"apps/v1", ResourceVersion:"1004", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-6c499547c4 to 3
I0523 02:33:01.454613   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201163-5588", Name:"nginx-6c499547c4", UID:"19bcdd95-0fbb-4e52-b5e8-1f70f7014a6c", APIVersion:"apps/v1", ResourceVersion:"1005", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6c499547c4-pkczk
I0523 02:33:01.460544   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201163-5588", Name:"nginx-6c499547c4", UID:"19bcdd95-0fbb-4e52-b5e8-1f70f7014a6c", APIVersion:"apps/v1", ResourceVersion:"1005", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6c499547c4-qlk9h
I0523 02:33:01.461000   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201163-5588", Name:"nginx-6c499547c4", UID:"19bcdd95-0fbb-4e52-b5e8-1f70f7014a6c", APIVersion:"apps/v1", ResourceVersion:"1005", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6c499547c4-xz9g5
Successful
message:The Deployment "nginx" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"name":"nginx3"}: `selector` does not match template `labels`
... skipping 291 lines ...
+++ [0523 02:33:06] Creating namespace namespace-1590201186-21366
namespace/namespace-1590201186-21366 created
Context "test" modified.
+++ [0523 02:33:07] Testing kubectl get
get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:{
    "apiVersion": "v1",
    "items": [],
... skipping 23 lines ...
has not:No resources found
Successful
message:NAME
has not:No resources found
get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:error: the server doesn't have a resource type "foobar"
has not:No resources found
Successful
message:No resources found in namespace-1590201186-21366 namespace.
has:No resources found
Successful
message:
has not:No resources found
Successful
message:No resources found in namespace-1590201186-21366 namespace.
has:No resources found
get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
Successful
message:Error from server (NotFound): pods "abc" not found
has not:List
Successful
message:I0523 02:33:09.082300   72076 loader.go:375] Config loaded from file:  /tmp/tmp.m6SRIfytZO/.kube/config
I0523 02:33:09.084010   72076 round_trippers.go:443] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0523 02:33:09.113043   72076 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
I0523 02:33:09.114554   72076 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/replicationcontrollers 200 OK in 1 milliseconds
... skipping 625 lines ...
}
get.sh:158: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(B<no value>Successful
message:valid-pod:
has:valid-pod:
Successful
message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2020-05-23T02:33:17Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fieldsType":"FieldsV1", "fieldsV1":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl-create", "operation":"Update", "time":"2020-05-23T02:33:17Z"}}, "name":"valid-pod", "namespace":"namespace-1590201196-31228", "resourceVersion":"1060", "selfLink":"/api/v1/namespaces/namespace-1590201196-31228/pods/valid-pod", "uid":"1f30bf53-f8c0-4dc5-9134-a8b619c59b7f"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
has:missing is not found
error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
Successful
message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2020-05-23T02:33:17Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl-create","operation":"Update","time":"2020-05-23T02:33:17Z"}],"name":"valid-pod","namespace":"namespace-1590201196-31228","resourceVersion":"1060","selfLink":"/api/v1/namespaces/namespace-1590201196-31228/pods/valid-pod","uid":"1f30bf53-f8c0-4dc5-9134-a8b619c59b7f"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2020-05-23T02:33:17Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fieldsType:FieldsV1 fieldsV1:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl-create operation:Update time:2020-05-23T02:33:17Z]] name:valid-pod namespace:namespace-1590201196-31228 resourceVersion:1060 selfLink:/api/v1/namespaces/namespace-1590201196-31228/pods/valid-pod uid:1f30bf53-f8c0-4dc5-9134-a8b619c59b7f] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
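
Both template failures above come from kubectl running its output templates in strict mode: jsonpath errors on a missing field, and go-templates appear to execute with missingkey=error (consistent with the message shown), so {{.missing}} on a map without that key fails instead of printing <no value>. The stdlib behavior is easy to reproduce; a standalone sketch, not kubectl's code:

package main

import (
	"fmt"
	"os"
	"text/template"
)

func main() {
	// Strict mode: a missing map key is an execution error rather than "<no value>".
	t := template.Must(template.New("output").
		Option("missingkey=error").
		Parse("{{.missing}}"))
	err := t.Execute(os.Stdout, map[string]interface{}{"kind": "Pod"})
	fmt.Println(err) // template: output:1:2: ... map has no entry for key "missing"
}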
... skipping 156 lines ...
  terminationGracePeriodSeconds: 30
status:
  phase: Pending
  qosClass: Guaranteed
has:name: valid-pod
Successful
message:Error from server (NotFound): pods "invalid-pod" not found
has:"invalid-pod" not found
pod "valid-pod" deleted
get.sh:196: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/redis-master created
pod/valid-pod created
Successful
message:redis-master valid-pod
has:redis-master valid-pod
pod "redis-master" deleted
pod "valid-pod" deleted
get.sh:210: Successful get configmaps {{range.items}}{{.metadata.name}}:{{end}}: 
E0523 02:33:21.863467   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
get.sh:211: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
get.sh:212: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: 
configmap/test-the-map created
service/test-the-service created
deployment.apps/test-the-deployment created
I0523 02:33:22.308211   57740 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1590201196-31228", Name:"test-the-deployment", UID:"0d7e08a2-78a2-45ed-99df-ad0561464758", APIVersion:"apps/v1", ResourceVersion:"1082", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-the-deployment-687ddb967b to 3
... skipping 25 lines ...
+++ [0523 02:33:22] Creating namespace namespace-1590201202-28383
namespace/namespace-1590201202-28383 created
Context "test" modified.
+++ [0523 02:33:23] Testing kubectl exec POD COMMAND
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
pod/test-pod created
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pods "test-pod" not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
pod "test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_exec_resource_name_tests
Running command: run_kubectl_exec_resource_name_tests

... skipping 3 lines ...
+++ [0523 02:33:23] Creating namespace namespace-1590201203-12449
namespace/namespace-1590201203-12449 created
Context "test" modified.
+++ [0523 02:33:23] Testing kubectl exec TYPE/NAME COMMAND
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: the server doesn't have a resource type "foo"
has:error:
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (NotFound): deployments.apps "bar" not found
has:"bar" not found
pod/test-pod created
replicaset.apps/frontend created
I0523 02:33:24.745718   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201203-12449", Name:"frontend", UID:"8fd4ca66-5ed0-4fd2-9dc4-c58a81fbcbe3", APIVersion:"apps/v1", ResourceVersion:"1121", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-8rqbj
I0523 02:33:24.749139   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201203-12449", Name:"frontend", UID:"8fd4ca66-5ed0-4fd2-9dc4-c58a81fbcbe3", APIVersion:"apps/v1", ResourceVersion:"1121", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-nttv6
I0523 02:33:24.751199   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201203-12449", Name:"frontend", UID:"8fd4ca66-5ed0-4fd2-9dc4-c58a81fbcbe3", APIVersion:"apps/v1", ResourceVersion:"1121", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-jl2cx
configmap/test-set-env-config created
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
has:not implemented
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod, type/name or --filename must be specified
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod frontend-8rqbj does not have a host assigned
has not:not found
Successful
message:kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Error from server (BadRequest): pod frontend-8rqbj does not have a host assigned
has not:pod, type/name or --filename must be specified
pod "test-pod" deleted
replicaset.apps "frontend" deleted
configmap "test-set-env-config" deleted
+++ exit code: 0
Recording: run_create_secret_tests
Running command: run_create_secret_tests

+++ Running case: test-cmd.run_create_secret_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_create_secret_tests
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
message:user-specified
has:user-specified
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
I0523 02:33:26.168699   54144 client.go:360] parsed scheme: "passthrough"
I0523 02:33:26.168750   54144 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{http://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0523 02:33:26.168762   54144 clientconn.go:933] ClientConn switching balancer to "pick_first"
Successful
{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"9bf68875-dd82-43bd-83a8-128ed8a77d0a","resourceVersion":"1143","creationTimestamp":"2020-05-23T02:33:26Z"}}
... skipping 2 lines ...
has:uid
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"9bf68875-dd82-43bd-83a8-128ed8a77d0a","resourceVersion":"1144","creationTimestamp":"2020-05-23T02:33:26Z"},"data":{"key1":"config1"}}
has:config1
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"9bf68875-dd82-43bd-83a8-128ed8a77d0a"}}
Successful
message:Error from server (NotFound): configmaps "tester-update-cm" not found
has:configmaps "tester-update-cm" not found
+++ exit code: 0
Recording: run_kubectl_create_kustomization_directory_tests
Running command: run_kubectl_create_kustomization_directory_tests

+++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 172 lines ...
has:Timeout exceeded while reading body
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
Successful
message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
has:Invalid timeout value
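The timeout validation above applies to kubectl's global --request-timeout flag; a short sketch of a valid and an invalid value (the pod listing is illustrative):
  kubectl get pods --request-timeout=5s   # accepted: integer plus s, m, or h
  kubectl get pods --request-timeout=5x   # rejected: "Invalid timeout value"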
pod "valid-pod" deleted
+++ exit code: 0
Recording: run_crd_tests
Running command: run_crd_tests

... skipping 240 lines ...
foo.company.com/test patched
crd.sh:236: Successful get foos/test {{.patched}}: value1
foo.company.com/test patched
crd.sh:238: Successful get foos/test {{.patched}}: value2
foo.company.com/test patched
crd.sh:240: Successful get foos/test {{.patched}}: <no value>
+++ [0523 02:33:39] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
{
    "apiVersion": "company.com/v1",
    "kind": "Foo",
    "metadata": {
        "annotations": {
            "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 372 lines ...
crd.sh:455: Successful get bars {{len .items}}: 1
namespace "non-native-resources" deleted
I0523 02:33:56.381683   54144 client.go:360] parsed scheme: "passthrough"
I0523 02:33:56.381736   54144 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{http://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0523 02:33:56.381751   54144 clientconn.go:933] ClientConn switching balancer to "pick_first"
crd.sh:458: Successful get bars {{len .items}}: 0
Error from server (NotFound): namespaces "non-native-resources" not found
customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
+++ exit code: 0
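The crd.sh run above exercised the "try --type merge" hint: strategic merge patch needs Go struct metadata that custom resources lack, so patching a CR falls back to a plain JSON merge patch. A minimal sketch, with the resource and field names taken from the test above:
  kubectl patch foos/test --type merge -p '{"patched":"value1"}'
  kubectl patch foos/test --type merge -p '{"patched":null}'   # a null in a merge patch deletes the key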
+++ [0523 02:33:58] Testing recursive resources
+++ [0523 02:33:58] Creating namespace namespace-1590201238-4981
namespace/namespace-1590201238-4981 created
Context "test" modified.
generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
W0523 02:33:58.522252   54144 cacher.go:151] Terminating all watchers from cacher *unstructured.Unstructured
E0523 02:33:58.523567   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
W0523 02:33:58.651525   54144 cacher.go:151] Terminating all watchers from cacher *unstructured.Unstructured
E0523 02:33:58.652612   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:pod/busybox0 created
pod/busybox1 created
error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
W0523 02:33:58.785960   54144 cacher.go:151] Terminating all watchers from cacher *unstructured.Unstructured
E0523 02:33:58.787144   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
W0523 02:33:58.930956   54144 cacher.go:151] Terminating all watchers from cacher *unstructured.Unstructured
E0523 02:33:58.932482   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
Successful
message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
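The recurring "Object 'Kind' is missing" failures here are expected: the recursive fixtures ship one deliberately broken manifest (its kind key is misspelled "ind"), and the tests assert that kubectl still processes the valid siblings when walking a directory. A sketch of the pattern, using the fixture path from the messages above:
  # -R/--recursive descends into the directory; valid manifests are created,
  # the broken one surfaces the decode error seen above
  kubectl create -f hack/testdata/recursive/pod --recursive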
generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0523 02:33:59.478965   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:pod/busybox0 replaced
pod/busybox1 replaced
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:Name:         busybox0
Namespace:    namespace-1590201238-4981
Priority:     0
Node:         <none>
... skipping 155 lines ...
Node-Selectors:   <none>
Tolerations:      <none>
Events:           <none>
unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0523 02:34:00.027897   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0523 02:34:00.113501   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
Successful
message:pod/busybox0 annotated
pod/busybox1 annotated
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod/busybox0 configured
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
pod/busybox1 configured
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx created
I0523 02:34:01.073929   57740 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1590201238-4981", Name:"nginx", UID:"8fb518e0-f2dc-4467-83dc-f37975578374", APIVersion:"apps/v1", ResourceVersion:"1325", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-9c6f87b75 to 3
I0523 02:34:01.078316   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201238-4981", Name:"nginx-9c6f87b75", UID:"59c67a52-e868-49a0-972a-375b64db1afc", APIVersion:"apps/v1", ResourceVersion:"1326", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9c6f87b75-mpzf6
I0523 02:34:01.080769   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201238-4981", Name:"nginx-9c6f87b75", UID:"59c67a52-e868-49a0-972a-375b64db1afc", APIVersion:"apps/v1", ResourceVersion:"1326", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9c6f87b75-td4dz
I0523 02:34:01.083441   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201238-4981", Name:"nginx-9c6f87b75", UID:"59c67a52-e868-49a0-972a-375b64db1afc", APIVersion:"apps/v1", ResourceVersion:"1326", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-9c6f87b75-x8jwx
generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
generic-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
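The convert deprecation above suggests reading the object back at the version you want instead; kubectl accepts a fully qualified resource.version.group form for that. A minimal sketch against the deployment created above:
  # ask the server for the apps/v1 representation directly
  kubectl get deployments.v1.apps nginx -o yaml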
E0523 02:34:01.477564   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:274: Successful get deployment nginx {{ .apiVersion }}: apps/v1
Successful
message:apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
... skipping 38 lines ...
deployment.apps "nginx" deleted
generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
E0523 02:34:01.922436   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
I0523 02:34:02.173571   57740 namespace_controller.go:185] Namespace has been deleted non-native-resources
generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
Successful
message:pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0523 02:34:02.618551   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
Successful
message:pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "busybox0" force deleted
pod "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/busybox0 created
I0523 02:34:03.362990   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1590201238-4981", Name:"busybox0", UID:"90664100-a4cb-46aa-8c5a-e3c6d321a769", APIVersion:"v1", ResourceVersion:"1357", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-k87gv
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0523 02:34:03.369451   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1590201238-4981", Name:"busybox1", UID:"3f426d28-a1db-40ef-ac34-be0fcbe2314e", APIVersion:"v1", ResourceVersion:"1359", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-zsp5t
generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
Successful
message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
horizontalpodautoscaler.autoscaling/busybox1 autoscaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
horizontalpodautoscaler.autoscaling "busybox0" deleted
horizontalpodautoscaler.autoscaling "busybox1" deleted
generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
Successful
message:service/busybox0 exposed
service/busybox1 exposed
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
E0523 02:34:05.228578   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I0523 02:34:05.391843   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1590201238-4981", Name:"busybox0", UID:"90664100-a4cb-46aa-8c5a-e3c6d321a769", APIVersion:"v1", ResourceVersion:"1381", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-x8bvs
I0523 02:34:05.403762   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1590201238-4981", Name:"busybox1", UID:"3f426d28-a1db-40ef-ac34-be0fcbe2314e", APIVersion:"v1", ResourceVersion:"1387", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-p4shm
generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
Successful
message:replicationcontroller/busybox0 scaled
replicationcontroller/busybox1 scaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
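The scaled-to-2 assertions above come from kubectl scale run against the recursive directory; a minimal sketch of the single-resource form (the harness's exact flags may differ):
  kubectl scale rc busybox0 --replicas=2
  kubectl get rc busybox0 -o jsonpath='{.spec.replicas}'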
generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0523 02:34:05.805720   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx1-deployment created
I0523 02:34:06.337948   57740 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1590201238-4981", Name:"nginx1-deployment", UID:"5d914faa-e6f1-4132-8897-6805d37870b3", APIVersion:"apps/v1", ResourceVersion:"1403", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-866c6857d5 to 2
deployment.apps/nginx0-deployment created
error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0523 02:34:06.341964   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201238-4981", Name:"nginx1-deployment-866c6857d5", UID:"753d8443-7d4f-4e63-89ef-38fb1bc84191", APIVersion:"apps/v1", ResourceVersion:"1404", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-866c6857d5-5pkzl
I0523 02:34:06.344246   57740 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1590201238-4981", Name:"nginx0-deployment", UID:"919b5648-877a-4ed5-ace4-cac2a68c126a", APIVersion:"apps/v1", ResourceVersion:"1405", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-ff7db88b6 to 2
I0523 02:34:06.345875   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201238-4981", Name:"nginx1-deployment-866c6857d5", UID:"753d8443-7d4f-4e63-89ef-38fb1bc84191", APIVersion:"apps/v1", ResourceVersion:"1404", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-866c6857d5-7vms8
I0523 02:34:06.352336   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201238-4981", Name:"nginx0-deployment-ff7db88b6", UID:"6fc2f12b-58a4-4162-a4f4-2b083b56892e", APIVersion:"apps/v1", ResourceVersion:"1409", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-ff7db88b6-hwjpl
I0523 02:34:06.364978   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201238-4981", Name:"nginx0-deployment-ff7db88b6", UID:"6fc2f12b-58a4-4162-a4f4-2b083b56892e", APIVersion:"apps/v1", ResourceVersion:"1409", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-ff7db88b6-7n9pw
generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
Successful
message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
deployment.apps/nginx1-deployment paused
deployment.apps/nginx0-deployment paused
generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
Successful
message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx0-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx1-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
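The REVISION tables above are kubectl rollout history output; a minimal sketch of querying it, using a deployment name from this test:
  kubectl rollout history deployment nginx1-deployment
  # --revision narrows the output to a single entry's pod template
  kubectl rollout history deployment nginx1-deployment --revision=1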
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
deployment.apps "nginx1-deployment" force deleted
deployment.apps "nginx0-deployment" force deleted
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
E0523 02:34:08.705871   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/busybox0 created
I0523 02:34:08.800804   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1590201238-4981", Name:"busybox0", UID:"634d4afa-2c18-4cdc-9b62-28ba89d80ec4", APIVersion:"v1", ResourceVersion:"1455", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-mhg9d
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0523 02:34:08.804284   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1590201238-4981", Name:"busybox1", UID:"ae363f65-0f34-4bcd-87e6-76b7d1f8e9b7", APIVersion:"v1", ResourceVersion:"1457", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-f8x9f
generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
... skipping 2 lines ...
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox0" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox1" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox0" resuming is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox1" resuming is not supported
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
Recording: run_namespace_tests
Running command: run_namespace_tests

+++ Running case: test-cmd.run_namespace_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_namespace_tests
+++ [0523 02:34:10] Testing kubectl(v1:namespaces)
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created (dry run)
namespace/my-namespace created (server dry run)
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
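The "(dry run)" and "(server dry run)" suffixes above distinguish client-side validation from a server-side dry run that exercises admission without persisting anything; a minimal sketch (around this release kubectl was migrating to the --dry-run=client|server spelling):
  kubectl create namespace my-namespace --dry-run=client -o yaml
  kubectl create namespace my-namespace --dry-run=server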
namespace/my-namespace created
core.sh:1446: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
(Bnamespace "my-namespace" deleted
I0523 02:34:11.467361   57740 shared_informer.go:240] Waiting for caches to sync for resource quota
I0523 02:34:11.467416   57740 shared_informer.go:247] Caches are synced for resource quota 
I0523 02:34:12.426291   57740 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0523 02:34:12.426355   57740 shared_informer.go:247] Caches are synced for garbage collector 
E0523 02:34:15.003307   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/my-namespace condition met
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
E0523 02:34:16.488416   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/my-namespace created
core.sh:1455: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
namespace "my-namespace" deleted
... skipping 30 lines ...
namespace "namespace-1590201207-18132" deleted
namespace "namespace-1590201207-32018" deleted
namespace "namespace-1590201209-6558" deleted
namespace "namespace-1590201212-851" deleted
namespace "namespace-1590201213-30730" deleted
namespace "namespace-1590201238-4981" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:warning: deleting cluster-scoped resources
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
namespace "my-namespace" deleted
namespace "namespace-1590201039-771" deleted
... skipping 29 lines ...
namespace "namespace-1590201207-18132" deleted
namespace "namespace-1590201207-32018" deleted
namespace "namespace-1590201209-6558" deleted
namespace "namespace-1590201212-851" deleted
namespace "namespace-1590201213-30730" deleted
namespace "namespace-1590201238-4981" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:namespace "my-namespace" deleted
namespace/quotas created
core.sh:1462: Successful get namespaces/quotas {{.metadata.name}}: quotas
core.sh:1463: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name \"test-quota\" }}found{{end}}{{end}}:: :
resourcequota/test-quota created (dry run)
resourcequota/test-quota created (server dry run)
core.sh:1467: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name \"test-quota\" }}found{{end}}{{end}}:: :
resourcequota/test-quota created
core.sh:1470: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name \"test-quota\" }}found{{end}}{{end}}:: found:
resourcequota "test-quota" deleted
I0523 02:34:17.805129   57740 resource_quota_controller.go:306] Resource quota has been deleted quotas/test-quota
namespace "quotas" deleted
E0523 02:34:18.294039   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0523 02:34:18.887846   57740 horizontal.go:354] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1590201238-4981
I0523 02:34:18.890833   57740 horizontal.go:354] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1590201238-4981
core.sh:1482: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
namespace/other created
core.sh:1486: Successful get namespaces/other {{.metadata.name}}: other
core.sh:1490: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
pod/valid-pod created
core.sh:1494: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:1496: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Successful
message:error: a resource cannot be retrieved by name across all namespaces
has:a resource cannot be retrieved by name across all namespaces
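The error above is kubectl refusing a by-name get combined with --all-namespaces, since a name is only unique within one namespace; a minimal sketch of the failing and working forms:
  kubectl get pods valid-pod --all-namespaces   # rejected
  kubectl get pods valid-pod --namespace=other  # accepted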
core.sh:1503: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:1507: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
(Bnamespace "other" deleted
... skipping 85 lines ...
  name: test
has not:example.com
core.sh:825: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-secrets\" }}found{{end}}{{end}}:: :
namespace/test-secrets created
core.sh:829: Successful get namespaces/test-secrets {{.metadata.name}}: test-secrets
core.sh:833: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
E0523 02:34:30.245081   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
secret/test-secret created
core.sh:837: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
E0523 02:34:30.468134   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:838: Successful get secret/test-secret --namespace=test-secrets {{.type}}: test-type
secret "test-secret" deleted
core.sh:848: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
secret/test-secret created
core.sh:852: Successful get secret/test-secret --namespace=test-secrets {{.metadata.name}}: test-secret
core.sh:853: Successful get secret/test-secret --namespace=test-secrets {{.type}}: kubernetes.io/dockerconfigjson
... skipping 16 lines ...
secret "test-secret" deleted
namespace "test-secrets" deleted
I0523 02:34:34.266364   54144 client.go:360] parsed scheme: "passthrough"
I0523 02:34:34.266427   54144 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{http://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0523 02:34:34.266439   54144 clientconn.go:933] ClientConn switching balancer to "pick_first"
I0523 02:34:34.436337   57740 namespace_controller.go:185] Namespace has been deleted other
E0523 02:34:34.858253   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0523 02:34:36.362717   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Recording: run_configmap_tests
Running command: run_configmap_tests

+++ Running case: test-cmd.run_configmap_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 30 lines ...
+++ command: run_client_config_tests
+++ [0523 02:34:45] Creating namespace namespace-1590201285-9653
namespace/namespace-1590201285-9653 created
Context "test" modified.
+++ [0523 02:34:46] Testing client config
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:Error in configuration: context was not found for specified context: missing-context
has:context was not found for specified context: missing-context
Successful
message:error: no server found for cluster "missing-cluster"
has:no server found for cluster "missing-cluster"
Successful
message:error: auth info "missing-user" does not exist
has:auth info "missing-user" does not exist
Successful
message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
has:error loading config file
Successful
message:error: stat missing-config: no such file or directory
has:no such file or directory
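The client-config checks above each break one kubeconfig ingredient; a minimal sketch of the corresponding invocations (file and entry names mirror the messages above):
  kubectl get pods --kubeconfig=missing          # stat missing: no such file or directory
  kubectl get pods --context=missing-context     # context was not found
  kubectl get pods --cluster=missing-cluster     # no server found for cluster
  kubectl get pods --user=missing-user           # auth info does not exist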
+++ exit code: 0
Recording: run_service_accounts_tests
Running command: run_service_accounts_tests

+++ Running case: test-cmd.run_service_accounts_tests 
... skipping 43 lines ...
Labels:                        <none>
Annotations:                   <none>
Schedule:                      59 23 31 2 *
Concurrency Policy:            Allow
Suspend:                       False
Successful Job History Limit:  3
Failed Job History Limit:      1
Starting Deadline Seconds:     <unset>
Selector:                      <unset>
Parallelism:                   <unset>
Completions:                   <unset>
Pod Template:
  Labels:  <none>
... skipping 37 lines ...
Labels:         controller-uid=daa79291-b242-47c5-8663-172a365c90d9
                job-name=test-job
Annotations:    cronjob.kubernetes.io/instantiate: manual
Parallelism:    1
Completions:    1
Start Time:     Sat, 23 May 2020 02:34:55 +0000
Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=daa79291-b242-47c5-8663-172a365c90d9
           job-name=test-job
  Containers:
   pi:
    Image:      k8s.gcr.io/perl
... skipping 467 lines ...
  type: ClusterIP
status:
  loadBalancer: {}
Successful
message:kubectl-create kubectl-set
has:kubectl-set
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
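The --local error above comes from running a kubectl set subcommand without an input object: --local never contacts the server, so the resource must arrive via --filename. A minimal sketch with an illustrative file name:
  # edits the selector in the file's object and prints it; no API call is made
  kubectl set selector -f redis-master-service.yaml role=padawan --local -o yaml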
core.sh:1007: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
service/redis-master selector updated
Successful
message:Error from server (Conflict): Operation cannot be fulfilled on services "redis-master": the object has been modified; please apply your changes to the latest version and try again
has:Conflict
core.sh:1020: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
(Bservice "redis-master" deleted
core.sh:1027: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
(Bcore.sh:1031: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
(Bservice/redis-master created
... skipping 106 lines ...
apps.sh:80: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
apps.sh:81: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:82: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
apps.sh:83: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:2 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1590201315-30563"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:latest","name":"kubernetes-pause"},{"image":"k8s.gcr.io/nginx:test-cmd","name":"app"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
 kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1590201315-30563"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
 kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
E0523 02:35:17.544885   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
daemonset.apps/bind will roll back to Pod Template:
  Labels:	service=bind
  Containers:
   kubernetes-pause:
    Image:	k8s.gcr.io/pause:2.0
    Port:	<none>
... skipping 7 lines ...
apps.sh:88: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:89: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
daemonset.apps/bind rolled back
apps.sh:92: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:93: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
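The revision-1000000 failure above is the history guard in kubectl rollout undo; a minimal sketch of the valid and invalid forms against the daemonset from this test:
  kubectl rollout undo daemonset/bind --to-revision=1         # rolls back to revision 1
  kubectl rollout undo daemonset/bind --to-revision=1000000   # unable to find specified revision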
apps.sh:97: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:98: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
daemonset.apps/bind rolled back
E0523 02:35:18.951943   57740 daemon_controller.go:291] namespace-1590201315-30563/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1590201315-30563", SelfLink:"/apis/apps/v1/namespaces/namespace-1590201315-30563/daemonsets/bind", UID:"dcbd7270-221e-4a12-b384-a32321884f66", ResourceVersion:"2008", Generation:4, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725798116, loc:(*time.Location)(0x716c8e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"4", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1590201315-30563\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl-client-side-apply", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0016cc560), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0016cc580)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0016cc5a0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0016cc5c0)}, v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0016cc5e0), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0016cc600)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0016cc620), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:latest", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), 
EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"app", Image:"k8s.gcr.io/nginx:test-cmd", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002b8a578), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0002b8620), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc0016cc640), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001b6a088)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002b8a5cc)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:3, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
apps.sh:101: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
apps.sh:102: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:103: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
daemonset.apps "bind" deleted
+++ exit code: 0
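
Note: the apps.sh assertions above are kubectl Go-template reads, and the rollback is the rollout family applied to a DaemonSet. A minimal sketch of the pattern being exercised, reusing resource names from the log; the daemon_controller.go "object has been modified" conflict above is an ordinary optimistic-concurrency retry during the rollback, not a test failure:

    # roll the DaemonSet back one revision, then read the pod template with a Go template
    kubectl rollout undo daemonset/bind
    kubectl get daemonset -o go-template='{{range .items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}'
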
Recording: run_rc_tests
... skipping 32 lines ...
Namespace:    namespace-1590201319-12652
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1590201319-12652
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
Namespace:    namespace-1590201319-12652
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
Namespace:    namespace-1590201319-12652
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 27 lines ...
Namespace:    namespace-1590201319-12652
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1590201319-12652
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 9 lines ...
Events:
  Type    Reason            Age   From                    Message
  ----    ------            ----  ----                    -------
  Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-pkv9t
  Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-r4l6g
  Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-mvz5s
E0523 02:35:21.488837   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful describe
Name:         frontend
Namespace:    namespace-1590201319-12652
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
Namespace:    namespace-1590201319-12652
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 15 lines ...
core.sh:1211: Successful get rc frontend {{.spec.replicas}}: 3
replicationcontroller/frontend scaled
E0523 02:35:21.847184   57740 replica_set.go:200] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend  namespace-1590201319-12652 /api/v1/namespaces/namespace-1590201319-12652/replicationcontrollers/frontend 4fed7f24-738a-44c5-87df-8d28e4d6ec54 2044 2 2020-05-23 02:35:20 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  [{kube-controller-manager Update v1 2020-05-23 02:35:20 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}} {kubectl-create Update v1 2020-05-23 02:35:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{"f:replicas":{},"f:selector":{".":{},"f:app":{},"f:tier":{}},"f:template":{".":{},"f:metadata":{".":{},"f:creationTimestamp":{},"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{".":{},"f:containers":{".":{},"k:{\"name\":\"php-redis\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"GET_HOSTS_FROM\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v4 [] []  [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00236d898 <nil> ClusterFirst map[]   <nil>  false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
I0523 02:35:21.853495   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1590201319-12652", Name:"frontend", UID:"4fed7f24-738a-44c5-87df-8d28e4d6ec54", APIVersion:"v1", ResourceVersion:"2044", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-pkv9t
core.sh:1215: Successful get rc frontend {{.spec.replicas}}: 2
core.sh:1219: Successful get rc frontend {{.spec.replicas}}: 2
error: Expected replicas to be 3, was 2
core.sh:1223: Successful get rc frontend {{.spec.replicas}}: 2
core.sh:1227: Successful get rc frontend {{.spec.replicas}}: 2
replicationcontroller/frontend scaled
I0523 02:35:22.474661   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1590201319-12652", Name:"frontend", UID:"4fed7f24-738a-44c5-87df-8d28e4d6ec54", APIVersion:"v1", ResourceVersion:"2050", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-br2hm
core.sh:1231: Successful get rc frontend {{.spec.replicas}}: 3
E0523 02:35:22.671976   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1235: Successful get rc frontend {{.spec.replicas}}: 3
replicationcontroller/frontend scaled
E0523 02:35:22.773643   57740 replica_set.go:200] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend  namespace-1590201319-12652 /api/v1/namespaces/namespace-1590201319-12652/replicationcontrollers/frontend 4fed7f24-738a-44c5-87df-8d28e4d6ec54 2055 4 2020-05-23 02:35:20 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  [{kubectl-create Update v1 2020-05-23 02:35:20 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{"f:replicas":{},"f:selector":{".":{},"f:app":{},"f:tier":{}},"f:template":{".":{},"f:metadata":{".":{},"f:creationTimestamp":{},"f:labels":{".":{},"f:app":{},"f:tier":{}}},"f:spec":{".":{},"f:containers":{".":{},"k:{\"name\":\"php-redis\"}":{".":{},"f:env":{".":{},"k:{\"name\":\"GET_HOSTS_FROM\"}":{".":{},"f:name":{},"f:value":{}}},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:ports":{".":{},"k:{\"containerPort\":80,\"protocol\":\"TCP\"}":{".":{},"f:containerPort":{},"f:protocol":{}}},"f:resources":{".":{},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}}} {kube-controller-manager Update v1 2020-05-23 02:35:22 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v4 [] []  [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001ae8e28 <nil> ClusterFirst map[]   <nil>  false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
I0523 02:35:22.778786   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1590201319-12652", Name:"frontend", UID:"4fed7f24-738a-44c5-87df-8d28e4d6ec54", APIVersion:"v1", ResourceVersion:"2055", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-br2hm
core.sh:1239: Successful get rc frontend {{.spec.replicas}}: 2
replicationcontroller "frontend" deleted
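
Note: the scale sequence above exercises both forms of kubectl scale; "Expected replicas to be 3, was 2" is the precondition flag failing. Roughly (names from the log):

    kubectl scale rc frontend --replicas=2                        # unconditional scale
    kubectl scale rc frontend --current-replicas=3 --replicas=3   # refused: .spec.replicas is currently 2
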
E0523 02:35:23.139217   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicationcontroller/redis-master created
I0523 02:35:23.207466   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1590201319-12652", Name:"redis-master", UID:"0412ca67-f288-4fdc-899e-f19aafd3a8c9", APIVersion:"v1", ResourceVersion:"2068", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-lmnh9
replicationcontroller/redis-slave created
I0523 02:35:23.446948   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1590201319-12652", Name:"redis-slave", UID:"af831e29-72cc-4665-966d-2dff484c399c", APIVersion:"v1", ResourceVersion:"2073", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-l6nfb
I0523 02:35:23.451713   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1590201319-12652", Name:"redis-slave", UID:"af831e29-72cc-4665-966d-2dff484c399c", APIVersion:"v1", ResourceVersion:"2073", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-6w5n7
replicationcontroller/redis-master scaled
... skipping 20 lines ...
(Bdeployment.apps "nginx-deployment" deleted
Successful
message:service/expose-test-deployment exposed
has:service/expose-test-deployment exposed
service "expose-test-deployment" deleted
Successful
message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
See 'kubectl expose -h' for help and examples
has:invalid deployment: no selectors
deployment.apps/nginx-deployment created
I0523 02:35:24.978078   57740 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1590201319-12652", Name:"nginx-deployment", UID:"6aa87922-08b0-48b4-9b91-22dfd5ab80fb", APIVersion:"apps/v1", ResourceVersion:"2156", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6866878c7b to 3
I0523 02:35:24.982278   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201319-12652", Name:"nginx-deployment-6866878c7b", UID:"50561c21-fb55-466e-a6db-26ba83fb4619", APIVersion:"apps/v1", ResourceVersion:"2157", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6866878c7b-dl54b
I0523 02:35:24.985495   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201319-12652", Name:"nginx-deployment-6866878c7b", UID:"50561c21-fb55-466e-a6db-26ba83fb4619", APIVersion:"apps/v1", ResourceVersion:"2157", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6866878c7b-998k4
... skipping 23 lines ...
service "frontend" deleted
service "frontend-2" deleted
service "frontend-3" deleted
service "frontend-4" deleted
service "frontend-5" deleted
Successful
message:error: cannot expose a Node
has:cannot expose
Successful
message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
has:metadata.name: Invalid value
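
Note: the expose failures above are all client- or apiserver-side validation; sketched with illustrative arguments (only the resource names are taken from the log, and the failing deployment is one created without selectors):

    kubectl expose deployment expose-test-deployment --port=80    # fails when the workload has no selector to copy
    kubectl expose node 127.0.0.1 --port=80                       # fails: a Node cannot be exposed
    kubectl expose rc frontend --port=80 \
      --name=invalid-large-service-name-that-has-more-than-sixty-three-characters   # fails: names are capped at 63 characters
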
Successful
message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 30 lines ...
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1378: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
horizontalpodautoscaler.autoscaling "frontend" deleted
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1382: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
horizontalpodautoscaler.autoscaling "frontend" deleted
Error: required flag(s) "max" not set
replicationcontroller "frontend" deleted
core.sh:1391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
... skipping 24 lines ...
          limits:
            cpu: 300m
          requests:
            cpu: 300m
      terminationGracePeriodSeconds: 0
status: {}
Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
deployment.apps/nginx-deployment-resources created
I0523 02:35:31.243817   57740 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1590201319-12652", Name:"nginx-deployment-resources", UID:"5a613d22-7874-458c-9920-85a632fca356", APIVersion:"apps/v1", ResourceVersion:"2317", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-79666b9cd9 to 3
I0523 02:35:31.251244   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201319-12652", Name:"nginx-deployment-resources-79666b9cd9", UID:"2808d637-8653-4586-aba5-4f8ec23b52cc", APIVersion:"apps/v1", ResourceVersion:"2318", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-79666b9cd9-l7dx2
I0523 02:35:31.259295   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201319-12652", Name:"nginx-deployment-resources-79666b9cd9", UID:"2808d637-8653-4586-aba5-4f8ec23b52cc", APIVersion:"apps/v1", ResourceVersion:"2318", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-79666b9cd9-j5gjh
I0523 02:35:31.263717   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201319-12652", Name:"nginx-deployment-resources-79666b9cd9", UID:"2808d637-8653-4586-aba5-4f8ec23b52cc", APIVersion:"apps/v1", ResourceVersion:"2318", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-79666b9cd9-g5tmj
core.sh:1397: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
core.sh:1398: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
core.sh:1399: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment-resources resource requirements updated
I0523 02:35:31.689549   57740 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1590201319-12652", Name:"nginx-deployment-resources", UID:"5a613d22-7874-458c-9920-85a632fca356", APIVersion:"apps/v1", ResourceVersion:"2331", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-8b888884f to 1
I0523 02:35:31.693169   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201319-12652", Name:"nginx-deployment-resources-8b888884f", UID:"3ccd90a6-6d5d-4eec-b176-5b4f4fc75de6", APIVersion:"apps/v1", ResourceVersion:"2332", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-8b888884f-d9lng
core.sh:1402: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
core.sh:1403: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
error: unable to find container named redis
deployment.apps/nginx-deployment-resources resource requirements updated
I0523 02:35:32.133645   57740 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1590201319-12652", Name:"nginx-deployment-resources", UID:"5a613d22-7874-458c-9920-85a632fca356", APIVersion:"apps/v1", ResourceVersion:"2341", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-79666b9cd9 to 2
I0523 02:35:32.140441   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201319-12652", Name:"nginx-deployment-resources-79666b9cd9", UID:"2808d637-8653-4586-aba5-4f8ec23b52cc", APIVersion:"apps/v1", ResourceVersion:"2345", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-79666b9cd9-l7dx2
I0523 02:35:32.145199   57740 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1590201319-12652", Name:"nginx-deployment-resources", UID:"5a613d22-7874-458c-9920-85a632fca356", APIVersion:"apps/v1", ResourceVersion:"2343", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-76f48f979f to 1
I0523 02:35:32.149879   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201319-12652", Name:"nginx-deployment-resources-76f48f979f", UID:"aed8aadd-6c37-4385-ae17-0ddb0534264b", APIVersion:"apps/v1", ResourceVersion:"2350", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-76f48f979f-qjtrr
core.sh:1408: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
... skipping 387 lines ...
    status: "True"
    type: Progressing
  observedGeneration: 4
  replicas: 4
  unavailableReplicas: 4
  updatedReplicas: 1
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
core.sh:1419: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
core.sh:1420: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
core.sh:1421: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
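
Note: the resource updates above are kubectl set resources; a sketch with names from the log (the cpu values are illustrative):

    kubectl set resources deployment nginx-deployment-resources --limits=cpu=200m            # all containers
    kubectl set resources deployment nginx-deployment-resources -c=redis --limits=cpu=200m   # error: unable to find container named redis
    kubectl set resources deployment nginx-deployment-resources --local --limits=cpu=200m -o yaml
    # --local refuses to run without -f, which is the "--filename when --local is set" error above
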
... skipping 47 lines ...
                pod-template-hash=c9cc54d87
Annotations:    deployment.kubernetes.io/desired-replicas: 1
                deployment.kubernetes.io/max-replicas: 2
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/test-nginx-apps
Replicas:       1 current / 1 desired
Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=test-nginx-apps
           pod-template-hash=c9cc54d87
  Containers:
   nginx:
    Image:        k8s.gcr.io/nginx:test-cmd
... skipping 107 lines ...
apps.sh:304: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
    Image:	k8s.gcr.io/nginx:test-cmd
deployment.apps/nginx rolled back (server dry run)
apps.sh:308: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
deployment.apps/nginx rolled back
apps.sh:312: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
error: unable to find specified revision 1000000 in history
apps.sh:315: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
deployment.apps/nginx rolled back
I0523 02:35:45.045768   57740 horizontal.go:354] Horizontal Pod Autoscaler frontend has been deleted in namespace-1590201319-12652
apps.sh:319: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
deployment.apps/nginx paused
error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
error: deployments.apps "nginx" can't restart paused deployment (run rollout resume first)
deployment.apps/nginx resumed
deployment.apps/nginx rolled back
    deployment.kubernetes.io/revision-history: 1,3
error: desired revision (3) is different from the running revision (5)
deployment.apps/nginx restarted
I0523 02:35:46.306735   57740 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1590201333-27630", Name:"nginx", UID:"c3454be2-9c6f-42c3-948d-65cfb72a6882", APIVersion:"apps/v1", ResourceVersion:"2573", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-697546885c to 0
I0523 02:35:46.313726   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201333-27630", Name:"nginx-697546885c", UID:"8dad5cd2-551d-4707-bb8c-28db9a63c9ad", APIVersion:"apps/v1", ResourceVersion:"2577", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-697546885c-tmg9m
I0523 02:35:46.314693   57740 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1590201333-27630", Name:"nginx", UID:"c3454be2-9c6f-42c3-948d-65cfb72a6882", APIVersion:"apps/v1", ResourceVersion:"2576", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-67bfdd978 to 1
I0523 02:35:46.318437   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201333-27630", Name:"nginx-67bfdd978", UID:"6bbc699a-aa83-4b63-93fd-499d5a54ca98", APIVersion:"apps/v1", ResourceVersion:"2581", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-67bfdd978-v27gt
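
Note: the pause/resume errors above encode the ordering constraint on rollouts; the sequence being exercised is roughly:

    kubectl rollout undo deployment nginx --to-revision=1000000   # error: unable to find specified revision
    kubectl rollout pause deployment nginx
    kubectl rollout undo deployment nginx       # refused while paused
    kubectl rollout restart deployment nginx    # likewise refused while paused
    kubectl rollout resume deployment nginx
    kubectl rollout status deployment nginx --revision=3   # errors when revision 3 is no longer the running one
    kubectl rollout restart deployment nginx    # produces the ScalingReplicaSet events above
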
Successful
... skipping 149 lines ...
apps.sh:363: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment image updated
I0523 02:35:49.420123   57740 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1590201333-27630", Name:"nginx-deployment", UID:"429dc3b1-7116-47c0-b40e-43ecdb85d520", APIVersion:"apps/v1", ResourceVersion:"2646", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6d5f69bf98 to 1
I0523 02:35:49.425393   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201333-27630", Name:"nginx-deployment-6d5f69bf98", UID:"cace3b21-f38f-40ea-b73f-8a1881273812", APIVersion:"apps/v1", ResourceVersion:"2647", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6d5f69bf98-flf9x
apps.sh:366: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
apps.sh:367: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
error: unable to find container named "redis"
deployment.apps/nginx-deployment image updated
apps.sh:372: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:373: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment image updated
apps.sh:376: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
apps.sh:377: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
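
Note: the image flips above are kubectl set image with container=image pairs; the deployment and nginx image names come from the log, the redis image value is illustrative, and '*' is the real wildcard for all containers:

    kubectl set image deployment nginx-deployment nginx=k8s.gcr.io/nginx:1.7.9
    kubectl set image deployment nginx-deployment redis=k8s.gcr.io/redis:1.0     # error: unable to find container named "redis"
    kubectl set image deployment nginx-deployment '*'=k8s.gcr.io/nginx:test-cmd  # updates every container
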
... skipping 49 lines ...
I0523 02:35:53.987858   57740 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1590201333-27630", Name:"nginx-deployment", UID:"2d9711b3-bdfe-49d3-9d62-c57be7b673dd", APIVersion:"apps/v1", ResourceVersion:"2785", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-75bb56f9c to 1
deployment.apps/nginx-deployment env updated
I0523 02:35:54.144193   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201333-27630", Name:"nginx-deployment-5d757cf5f8", UID:"8e94a83b-2949-4c30-9c17-58052ec00bda", APIVersion:"apps/v1", ResourceVersion:"2786", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-5d757cf5f8-cxkgq
deployment.apps/nginx-deployment env updated
deployment.apps "nginx-deployment" deleted
configmap "test-set-env-config" deleted
E0523 02:35:54.489451   57740 replica_set.go:535] sync "namespace-1590201333-27630/nginx-deployment-55fd6d5dd6" failed with replicasets.apps "nginx-deployment-55fd6d5dd6" not found
secret "test-set-env-secret" deleted
+++ exit code: 0
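
Note: the two env updates above use kubectl set env's --from sources, matching the configmap and secret deleted right after; a sketch with the names from the log:

    kubectl set env deployment nginx-deployment --from=configmap/test-set-env-config
    kubectl set env deployment nginx-deployment --from=secret/test-set-env-secret
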
E0523 02:35:54.539877   57740 replica_set.go:535] sync "namespace-1590201333-27630/nginx-deployment-85f7d5566f" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-85f7d5566f": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1590201333-27630/nginx-deployment-85f7d5566f, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 36a6e6ed-cb90-4c6d-9fc6-4b9b646f52d6, UID in object meta: 
Recording: run_rs_tests
Running command: run_rs_tests
E0523 02:35:54.589794   57740 replica_set.go:535] sync "namespace-1590201333-27630/nginx-deployment-75bb56f9c" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-75bb56f9c": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1590201333-27630/nginx-deployment-75bb56f9c, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 67d5733b-d0bd-40bd-8ea9-bde753289974, UID in object meta: 

+++ Running case: test-cmd.run_rs_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_rs_tests
+++ [0523 02:35:54] Creating namespace namespace-1590201354-17240
E0523 02:35:54.639306   57740 replica_set.go:535] sync "namespace-1590201333-27630/nginx-deployment-5d757cf5f8" failed with replicasets.apps "nginx-deployment-5d757cf5f8" not found
E0523 02:35:54.689223   57740 replica_set.go:535] sync "namespace-1590201333-27630/nginx-deployment-8486fbf9cc" failed with replicasets.apps "nginx-deployment-8486fbf9cc" not found
namespace/namespace-1590201354-17240 created
Context "test" modified.
+++ [0523 02:35:54] Testing kubectl(v1:replicasets)
apps.sh:540: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
replicaset.apps/frontend created
I0523 02:35:55.171139   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201354-17240", Name:"frontend", UID:"1be4ba2c-622b-415f-b2e4-7479fdb6f867", APIVersion:"apps/v1", ResourceVersion:"2820", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-mkz6g
... skipping 8 lines ...
I0523 02:35:55.776649   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201354-17240", Name:"frontend", UID:"bd78e3ba-9d5e-40dd-8313-d0168ff6f29f", APIVersion:"apps/v1", ResourceVersion:"2836", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-8tfxn
I0523 02:35:55.780162   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201354-17240", Name:"frontend", UID:"bd78e3ba-9d5e-40dd-8313-d0168ff6f29f", APIVersion:"apps/v1", ResourceVersion:"2836", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-fzb4f
I0523 02:35:55.780720   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1590201354-17240", Name:"frontend", UID:"bd78e3ba-9d5e-40dd-8313-d0168ff6f29f", APIVersion:"apps/v1", ResourceVersion:"2836", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-2mjcx
apps.sh:554: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
+++ [0523 02:35:55] Deleting rs
replicaset.apps "frontend" deleted
E0523 02:35:56.089385   57740 replica_set.go:535] sync "namespace-1590201354-17240/frontend" failed with replicasets.apps "frontend" not found
apps.sh:558: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:560: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
pod "frontend-2mjcx" deleted
pod "frontend-8tfxn" deleted
pod "frontend-fzb4f" deleted
apps.sh:563: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
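
Note: the rs delete above leaves its pods running (apps.sh:560 still sees three php-redis pods), i.e. a non-cascading delete; on kubectl of this vintage that is spelled roughly:

    kubectl delete rs frontend --cascade=false   # ReplicaSet removed, frontend-* pods orphaned
    kubectl delete pods -l tier=frontend         # the orphaned pods are then removed explicitly
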
... skipping 15 lines ...
Namespace:    namespace-1590201354-17240
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1590201354-17240
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
Namespace:    namespace-1590201354-17240
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
Namespace:    namespace-1590201354-17240
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 25 lines ...
Namespace:    namespace-1590201354-17240
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1590201354-17240
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1590201354-17240
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
Namespace:    namespace-1590201354-17240
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 216 lines ...
horizontalpodautoscaler.autoscaling/frontend autoscaled
apps.sh:705: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
Successful
message:kubectl-autoscale
has:kubectl-autoscale
horizontalpodautoscaler.autoscaling "frontend" deleted
Error: required flag(s) "max" not set
replicaset.apps "frontend" deleted
+++ exit code: 0
Recording: run_stateful_set_tests
Running command: run_stateful_set_tests

+++ Running case: test-cmd.run_stateful_set_tests 
... skipping 4 lines ...
Context "test" modified.
+++ [0523 02:36:07] Testing kubectl(v1:statefulsets)
apps.sh:499: Successful get statefulset {{range.items}}{{.metadata.name}}:{{end}}: 
I0523 02:36:07.557749   54144 controller.go:606] quota admission added evaluator for: statefulsets.apps
statefulset.apps/nginx created
apps.sh:505: Successful get statefulset nginx {{.spec.replicas}}: 0
E0523 02:36:07.719647   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:506: Successful get statefulset nginx {{.status.observedGeneration}}: 1
statefulset.apps/nginx scaled
I0523 02:36:07.890294   57740 event.go:278] Event(v1.ObjectReference{Kind:"StatefulSet", Namespace:"namespace-1590201367-11073", Name:"nginx", UID:"b2284066-b7a6-403d-8e49-6b14f997c943", APIVersion:"apps/v1", ResourceVersion:"3090", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' create Pod nginx-0 in StatefulSet nginx successful
apps.sh:510: Successful get statefulset nginx {{.spec.replicas}}: 1
apps.sh:511: Successful get statefulset nginx {{.status.observedGeneration}}: 2
statefulset.apps/nginx restarted
... skipping 46 lines ...
apps.sh:465: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:466: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
statefulset.apps/nginx rolled back
apps.sh:469: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
apps.sh:470: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
I0523 02:36:12.661729   54144 client.go:360] parsed scheme: "passthrough"
I0523 02:36:12.661785   54144 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{http://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0523 02:36:12.661796   54144 clientconn.go:933] ClientConn switching balancer to "pick_first"
apps.sh:474: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
apps.sh:475: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
statefulset.apps/nginx rolled back
apps.sh:478: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
E0523 02:36:13.129692   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:479: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:480: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
statefulset.apps "nginx" deleted
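
Note: StatefulSets share the rollout verbs exercised on Deployments earlier; a rough sketch of the sequence above, with names from the log:

    kubectl scale statefulset nginx --replicas=1       # observedGeneration ticks 1 -> 2
    kubectl rollout restart statefulset nginx
    kubectl rollout undo statefulset nginx             # back to nginx-slim:0.7 with one container
    kubectl rollout undo statefulset nginx --to-revision=1000000   # error: unable to find specified revision
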
I0523 02:36:13.373255   57740 stateful_set.go:419] StatefulSet has been deleted namespace-1590201369-26835/nginx
+++ exit code: 0
Recording: run_lists_tests
... skipping 53 lines ...
Name:         mock
Namespace:    namespace-1590201374-15236
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 9 lines ...
replicationcontroller "mock" deleted
service/mock replaced
replicationcontroller/mock replaced
I0523 02:36:15.702674   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1590201374-15236", Name:"mock", UID:"649eaa43-ef02-43f2-8c75-db37d0cabf53", APIVersion:"v1", ResourceVersion:"3176", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: mock-ncjs9
generic-resources.sh:96: Successful get services mock {{.metadata.labels.status}}: replaced
generic-resources.sh:102: Successful get rc mock {{.metadata.labels.status}}: replaced
E0523 02:36:16.165781   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/mock edited
replicationcontroller/mock edited
generic-resources.sh:114: Successful get services mock {{.metadata.labels.status}}: edited
generic-resources.sh:120: Successful get rc mock {{.metadata.labels.status}}: edited
service/mock labeled
replicationcontroller/mock labeled
... skipping 35 lines ...
Name:         mock
Namespace:    namespace-1590201374-15236
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 36 lines ...
generic-resources.sh:80: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: mock:
NAME           TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/mock   ClusterIP   10.0.0.201   <none>        99/TCP    0s

NAME                         DESIRED   CURRENT   READY   AGE
replicationcontroller/mock   1         1         0       0s
E0523 02:36:20.619694   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Name:              mock
Namespace:         namespace-1590201374-15236
Labels:            app=mock
Annotations:       <none>
Selector:          app=mock
Type:              ClusterIP
... skipping 8 lines ...
Name:         mock
Namespace:    namespace-1590201374-15236
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 42 lines ...
Namespace:    namespace-1590201374-15236
Selector:     app=mock
Labels:       app=mock
              status=replaced
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 11 lines ...
Namespace:    namespace-1590201374-15236
Selector:     app=mock2
Labels:       app=mock2
              status=replaced
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock2
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 106 lines ...
+++ [0523 02:36:29] Testing persistent volumes
storage.sh:30: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
persistentvolume/pv0001 created
storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
persistentvolume "pv0001" deleted
persistentvolume/pv0002 created
E0523 02:36:30.237818   57740 pv_protection_controller.go:118] PV pv0002 failed with : Operation cannot be fulfilled on persistentvolumes "pv0002": the object has been modified; please apply your changes to the latest version and try again
storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
(Bpersistentvolume "pv0002" deleted
persistentvolume/pv0003 created
E0523 02:36:30.677300   57740 pv_protection_controller.go:118] PV pv0003 failed with : Operation cannot be fulfilled on persistentvolumes "pv0003": the object has been modified; please apply your changes to the latest version and try again
storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
(Bpersistentvolume "pv0003" deleted
storage.sh:42: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
(Bpersistentvolume/pv0001 created
E0523 02:36:31.203347   57740 pv_protection_controller.go:118] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
storage.sh:45: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
(BSuccessful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
persistentvolume "pv0001" deleted
has:warning: deleting cluster-scoped resources
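
Note: the pv_protection_controller conflicts above are benign write races: the controller adds the kubernetes.io/pv-protection finalizer while the test is creating and deleting the same PVs. The warning is what kubectl prints when a namespace is supplied for a cluster-scoped kind, e.g. (pv0001.yaml is a hypothetical filename):

    kubectl delete -f pv0001.yaml --namespace=default
    # warning: deleting cluster-scoped resources, not scoped to the provided namespace
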
Successful
... skipping 539 lines ...
yes
has:the server doesn't have a resource type
Successful
message:yes
has:yes
Successful
message:error: --subresource can not be used with NonResourceURL
has:subresource can not be used with NonResourceURL
Successful
Successful
message:yes
0
has:0
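
Note: the yes/no checks above are kubectl auth can-i, whose output (and exit code) mirror the answer; a sketch of the forms being exercised:

    kubectl auth can-i get pods --subresource=log     # resource + subresource form
    kubectl auth can-i get /logs                      # NonResourceURL form
    kubectl auth can-i get /logs --subresource=log    # error: --subresource can not be used with NonResourceURL
    kubectl auth can-i --list                         # dumps rules like the Verbs/APIGroups table below
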
... skipping 59 lines ...
		{Verbs:[get list watch] APIGroups:[] Resources:[configmaps] ResourceNames:[] NonResourceURLs:[]}
legacy-script.sh:832: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
legacy-script.sh:833: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
legacy-script.sh:834: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
legacy-script.sh:835: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
Successful
message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
has:only rbac.authorization.k8s.io/v1 is supported
rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
role.rbac.authorization.k8s.io "testing-R" deleted
warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
... skipping 20 lines ...
replicationcontroller/cassandra created
I0523 02:36:40.405809   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1590201399-736", Name:"cassandra", UID:"4e52ee13-b9c0-4537-b788-108a1b61591b", APIVersion:"v1", ResourceVersion:"3471", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-2nbmp
I0523 02:36:40.409518   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1590201399-736", Name:"cassandra", UID:"4e52ee13-b9c0-4537-b788-108a1b61591b", APIVersion:"v1", ResourceVersion:"3471", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-xpcks
service/cassandra created
Waiting for Get all -l'app=cassandra' {{range.items}}{{range .metadata.labels}}{{.}}:{{end}}{{end}} : expected: cassandra:cassandra:cassandra:cassandra::, got: cassandra:cassandra:cassandra:cassandra:

discovery.sh:91: FAIL!
Get all -l'app=cassandra' {{range.items}}{{range .metadata.labels}}{{.}}:{{end}}{{end}}
  Expected: cassandra:cassandra:cassandra:cassandra::
  Got:      cassandra:cassandra:cassandra:cassandra:
55 /home/prow/go/src/k8s.io/kubernetes/hack/lib/test.sh
discovery.sh:92: Successful get all -l'app=cassandra' {{range.items}}{{range .metadata.labels}}{{.}}:{{end}}{{end}}: cassandra:cassandra:cassandra:cassandra:
(Bpod "cassandra-2nbmp" deleted
I0523 02:36:41.025986   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1590201399-736", Name:"cassandra", UID:"4e52ee13-b9c0-4537-b788-108a1b61591b", APIVersion:"v1", ResourceVersion:"3477", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-zfhqz
I0523 02:36:41.026196   57740 event.go:278] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"namespace-1590201399-736", Name:"cassandra", UID:"25210c83-478f-444d-b654-ef622d360a52", APIVersion:"v1", ResourceVersion:"3479", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpoint' Failed to update endpoint namespace-1590201399-736/cassandra: Operation cannot be fulfilled on endpoints "cassandra": the object has been modified; please apply your changes to the latest version and try again
pod "cassandra-xpcks" deleted
I0523 02:36:41.034590   57740 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1590201399-736", Name:"cassandra", UID:"4e52ee13-b9c0-4537-b788-108a1b61591b", APIVersion:"v1", ResourceVersion:"3477", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-fxpcd
replicationcontroller "cassandra" deleted
E0523 02:36:41.040397   57740 replica_set.go:535] sync "namespace-1590201399-736/cassandra" failed with replicationcontrollers "cassandra" not found
service "cassandra" deleted
+++ exit code: 0
Recording: run_kubectl_explain_tests
Running command: run_kubectl_explain_tests

+++ Running case: test-cmd.run_kubectl_explain_tests 
... skipping 354 lines ...
some-other-random            default   0         8s
has:all-ns-test-2
namespace "all-ns-test-1" deleted
I0523 02:36:48.460669   54144 client.go:360] parsed scheme: "passthrough"
I0523 02:36:48.460723   54144 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{http://127.0.0.1:2379  <nil> 0 <nil>}] <nil> <nil>}
I0523 02:36:48.460733   54144 clientconn.go:933] ClientConn switching balancer to "pick_first"
E0523 02:36:50.547338   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0523 02:36:52.174177   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace "all-ns-test-2" deleted
E0523 02:36:52.659390   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
I0523 02:36:57.113716   57740 namespace_controller.go:185] Namespace has been deleted all-ns-test-1
get.sh:376: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
get.sh:380: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
get.sh:384: Successful get nodes {{range.items}}{{.metadata.name}}:{{end}}: 127.0.0.1:
... skipping 207 lines ...
Successful
message:foo:
has:foo:
Successful
message:foo:
has:foo:
E0523 02:37:01.714797   57740 reflector.go:127] k8s.io/client-go/metadata/metadatainformer/informer.go:90: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:foo:
has:foo:
Successful
message:valid-pod:
has:valid-pod:
... skipping 493 lines ...
message:node/127.0.0.1 already uncordoned (server dry run)
has:already uncordoned
node-management.sh:145: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
node/127.0.0.1 labeled
node-management.sh:150: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
Successful
message:error: cannot specify both a node name and a --selector option
See 'kubectl drain -h' for help and examples
has:cannot specify both a node name
Successful
message:error: USAGE: cordon NODE [flags]
See 'kubectl cordon -h' for help and examples
has:error\: USAGE\: cordon NODE
node/127.0.0.1 already uncordoned
Successful
message:error: You must provide one or more resources by argument or filename.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
   '<resource> <name>'
   '<resource>'
has:must provide one or more resources
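
Note: the node-management failures above are argument validation in cordon/drain; the valid forms exercised nearby look like:

    kubectl cordon 127.0.0.1                        # sets .spec.unschedulable
    kubectl uncordon 127.0.0.1 --dry-run=server     # "already uncordoned (server dry run)"
    kubectl drain 127.0.0.1 --selector=test=label   # error: cannot specify both a node name and a --selector option
    kubectl cordon                                  # error: USAGE: cordon NODE
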
... skipping 14 lines ...
+++ [0523 02:37:14] Testing kubectl plugins
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/version/kubectl-version
  - warning: kubectl-version overwrites existing command: "kubectl version"
error: one plugin warning was found
has:kubectl-version overwrites existing command: "kubectl version"
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/kubectl-foo
test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
  - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
error: one plugin warning was found
has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/kubectl-foo
has:plugins are available
Successful
message:Unable read directory "test/fixtures/pkg/kubectl/plugins/empty" from your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory. Skipping...
error: unable to find any kubectl plugins in your PATH
has:unable to find any kubectl plugins in your PATH
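
Note: plugin discovery above is purely PATH-based: any executable named kubectl-<name> becomes `kubectl <name>`, and `kubectl plugin list` emits the warnings shown when a plugin would overwrite a builtin or is shadowed by an earlier PATH entry. A sketch, using the fixture paths from the log:

    PATH=test/fixtures/pkg/kubectl/plugins kubectl plugin list
    kubectl foo arg1    # dispatches to the kubectl-foo executable found on PATH
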
Successful
message:I am plugin foo
has:plugin foo
Successful
message:I am plugin bar called with args test/fixtures/pkg/kubectl/plugins/bar/kubectl-bar arg1
... skipping 10 lines ...

+++ Running case: test-cmd.run_impersonation_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_impersonation_tests
+++ [0523 02:37:15] Testing impersonation
Successful
message:error: requesting groups or user-extra for  without impersonating a user
has:without impersonating a user
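The impersonation error above fires because groups (or user-extra) were requested without a user: --as-group is only valid alongside --as. A sketch of both shapes (user and group taken from the CSR assertions below):
$ kubectl get pods --as-group=system:authenticated             # rejected: group impersonation without a user
$ kubectl get pods --as=user1 --as-group=system:authenticated  # valid: impersonate user1 in group system:authenticated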
certificatesigningrequest.certificates.k8s.io/foo created
authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
authorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
certificatesigningrequest.certificates.k8s.io "foo" deleted
certificatesigningrequest.certificates.k8s.io/foo created
... skipping 72 lines ...
I0523 02:37:19.427555   54144 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
I0523 02:37:19.427566   54144 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
I0523 02:37:19.427607   54144 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
I0523 02:37:19.427611   54144 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
I0523 02:37:19.427628   54144 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
I0523 02:37:19.427646   54144 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
E0523 02:37:19.427660   54144 controller.go:184] rpc error: code = Unavailable desc = transport is closing
I0523 02:37:19.427713   54144 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
I0523 02:37:19.427729   54144 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
junit report dir: /logs/artifacts
+++ [0523 02:37:19] Clean up complete
+ make test-integration
+++ [0523 02:37:24] Checking etcd is on PATH
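To reproduce just the failing part of this phase locally, the tree's Makefile can narrow the run; a sketch, assuming the usual WHAT/KUBE_TEST_ARGS knobs:
$ make test-integration WHAT=./vendor/k8s.io/apiextensions-apiserver/test/integration KUBE_TEST_ARGS='-run TestSubresourcePatch$'   # etcd must be on PATH, as checked above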
... skipping 338 lines ...
    synthetic_master_test.go:722: UPDATE_NODE_APISERVER is not set

=== SKIP: test/integration/scheduler_perf TestSchedule100Node3KPods (0.00s)
    scheduler_test.go:73: Skipping because we want to run short tests
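The scheduler_perf skip above is the short-test gate: the suite opts out when short tests are requested, so running the package without -short would presumably exercise it. A sketch:
$ go test k8s.io/kubernetes/test/integration/scheduler_perf -run 'TestSchedule100Node3KPods$'   # no -short, so the 100-node/3k-pod case runs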


=== Failed
=== FAIL: vendor/k8s.io/apiextensions-apiserver/test/integration TestSubresourcePatch (1.95s)
I0523 02:48:47.054525  122220 nonstructuralschema_controller.go:198] Shutting down NonStructuralSchemaConditionController
I0523 02:48:47.054539  122220 customresource_discovery_controller.go:245] Shutting down DiscoveryController
I0523 02:48:47.054551  122220 establishing_controller.go:87] Shutting down EstablishingController
I0523 02:48:47.054569  122220 dynamic_serving_content.go:145] Shutting down serving-cert::/tmp/apiextensions-apiserver543885913/apiserver.crt::/tmp/apiextensions-apiserver543885913/apiserver.key
I0523 02:48:47.054718  122220 secure_serving.go:231] Stopped listening on 127.0.0.1:34507
I0523 02:48:47.054741  122220 tlsconfig.go:255] Shutting down DynamicServingCertificateController
... skipping 7 lines ...
I0523 02:48:48.221224  122220 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
W0523 02:48:48.225157  122220 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0523 02:48:48.229302  122220 secure_serving.go:187] Serving securely on 127.0.0.1:35115
I0523 02:48:48.229372  122220 customresource_discovery_controller.go:209] Starting DiscoveryController
I0523 02:48:48.229409  122220 dynamic_serving_content.go:130] Starting serving-cert::/tmp/apiextensions-apiserver448511987/apiserver.crt::/tmp/apiextensions-apiserver448511987/apiserver.key
I0523 02:48:48.229447  122220 tlsconfig.go:240] Starting DynamicServingCertificateController
E0523 02:48:48.230156  122220 reflector.go:127] k8s.io/client-go/informers/factory.go:135: Failed to watch *v1.Service: failed to list *v1.Service: Get http://127.1.2.3:12345/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.1.2.3:12345: connect: connection refused
I0523 02:48:48.230203  122220 naming_controller.go:291] Starting NamingConditionController
I0523 02:48:48.230226  122220 establishing_controller.go:76] Starting EstablishingController
I0523 02:48:48.230243  122220 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I0523 02:48:48.230264  122220 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0523 02:48:48.230294  122220 crd_finalizer.go:266] Starting CRDFinalizer
I0523 02:48:48.938666  122220 client.go:360] parsed scheme: "endpoint"
... skipping 17 lines ...
    subresources_test.go:795: Patching .status.num again to 999
    basic_test.go:1046: wanted "61656" at .metadata.resourceVersion, got "61658"
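On the failing assertion: per the preceding step, the test re-applies the same .status.num value through the CR's status subresource and expects the no-op patch to leave .metadata.resourceVersion unchanged, but the version advanced from 61656 to 61658, meaning some write bumped it in between. A sketch of the kind of request involved, via the raw status endpoint (group, version, and resource names hypothetical):
$ kubectl proxy --port=8001 &
$ curl -X PATCH 'http://127.0.0.1:8001/apis/mygroup.example.com/v1beta1/namespaces/default/foos/example/status' \
    -H 'Content-Type: application/merge-patch+json' \
    -d '{"status":{"num":999}}'   # only .status is mutable on this endpoint; re-sending identical content should not bump resourceVersion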


DONE 2475 tests, 6 skipped, 1 failure in 5.899s
+++ [0523 02:50:18] Saved JUnit XML test report to /logs/artifacts/junit_20200523-023730.xml
make[1]: *** [Makefile:185: test] Error 1
!!! [0523 02:50:18] Call tree:
!!! [0523 02:50:18]  1: hack/make-rules/test-integration.sh:97 runTests(...)
+++ [0523 02:50:18] Cleaning up etcd
+++ [0523 02:50:19] Integration test cleanup complete
make: *** [Makefile:204: test-integration] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
Cleaning up after docker
Stopping Docker: docker
Program process in pidfile '/var/run/docker-ssd.pid', 1 process(es), refused to die.
... skipping 3 lines ...