PR: ahmad-diaa: [WIP] Flatten Scheduler Struct
Result: FAILURE
Tests: 1 failed / 1385 succeeded
Started: 2019-05-15 21:52
Elapsed: 29m47s
Revision: master:aaec77a9, 73835:472ee9e1
Builder: gke-prow-containerd-pool-99179761-371j
pod: bafa72b6-775b-11e9-963a-0a580a6c053c
infra-commit: 0f0e3e066
repo: k8s.io/kubernetes
repo-commit: a50e7f62f8e8df59dee25fea7b14b22190f6671c
repos: {u'k8s.io/kubernetes': u'master:aaec77a94b67878ca1bdd884f2778f4388d203f2,73835:472ee9e17196916dad0451ecafe3d8d4b9dac66d'}

Test Failures


k8s.io/kubernetes/test/integration/daemonset [build failed] 0.00s
from junit_d431ed5f68ae4ddf888439fb96b687a923412204_20190515-220845.xml
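
Note: the daemonset integration suite failed to compile rather than failing at runtime. Given the PR title "[WIP] Flatten Scheduler Struct", a plausible cause is test code that still addresses scheduler fields through the old nested layout; once the struct is flattened, such references stop building. A minimal sketch of that failure mode, with hypothetical type names that merely stand in for the real ones:

    package main

    import "fmt"

    // nestedScheduler is the hypothetical pre-refactor shape: fields one level down.
    type nestedScheduler struct {
    	Options struct{ Name string }
    }

    // flatScheduler is the hypothetical post-refactor shape: the nested level is
    // flattened away and its fields promoted to the top.
    type flatScheduler struct {
    	Name string
    }

    func main() {
    	s := flatScheduler{Name: "default-scheduler"}
    	// Test code written against the old shape no longer compiles:
    	//   _ = s.Options.Name // compile error: s.Options undefined
    	fmt.Println(s.Name) // callers must read the promoted field directly
    }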

Passed tests: 1385 (output omitted)

Skipped tests: 4 (output omitted)

Error lines from build-log.txt

... skipping 320 lines ...
W0515 22:02:16.951] I0515 22:02:16.950591   47613 serving.go:312] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
W0515 22:02:16.952] I0515 22:02:16.950770   47613 server.go:558] external host was not specified, using 172.17.0.2
W0515 22:02:16.952] W0515 22:02:16.950806   47613 authentication.go:415] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
W0515 22:02:16.952] I0515 22:02:16.951623   47613 server.go:145] Version: v1.16.0-alpha.0.74+a50e7f62f8e8df
W0515 22:02:17.432] I0515 22:02:17.431476   47613 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0515 22:02:17.433] I0515 22:02:17.431731   47613 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0515 22:02:17.433] E0515 22:02:17.433194   47613 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 22:02:17.434] E0515 22:02:17.433264   47613 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 22:02:17.434] E0515 22:02:17.433310   47613 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 22:02:17.434] E0515 22:02:17.433340   47613 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 22:02:17.434] E0515 22:02:17.433398   47613 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 22:02:17.434] E0515 22:02:17.433424   47613 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 22:02:17.435] E0515 22:02:17.433454   47613 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 22:02:17.435] E0515 22:02:17.433567   47613 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 22:02:17.435] E0515 22:02:17.433639   47613 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 22:02:17.435] E0515 22:02:17.433687   47613 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 22:02:17.435] E0515 22:02:17.433714   47613 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 22:02:17.435] E0515 22:02:17.433747   47613 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
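
Note: the repeated "duplicate metrics collector registration attempted" errors come from the Prometheus client registry rejecting a second collector that carries the same fully-qualified metric name, which happens when the admission quota controller's workqueue metrics are constructed more than once in the same process. A minimal sketch with client_golang; the metric name below simply echoes the log and is illustrative:

    package main

    import (
    	"fmt"

    	"github.com/prometheus/client_golang/prometheus"
    )

    func main() {
    	newDepthGauge := func() prometheus.Gauge {
    		return prometheus.NewGauge(prometheus.GaugeOpts{
    			Name: "admission_quota_controller_depth", // illustrative name echoing the log
    			Help: "Current depth of the workqueue.",
    		})
    	}

    	// The first registration succeeds.
    	if err := prometheus.Register(newDepthGauge()); err != nil {
    		fmt.Println("first register:", err)
    	}

    	// A second collector with the same fully-qualified name is rejected,
    	// which is the condition the log lines above report.
    	if err := prometheus.Register(newDepthGauge()); err != nil {
    		if are, ok := err.(prometheus.AlreadyRegisteredError); ok {
    			fmt.Println("duplicate register:", are)
    		}
    	}
    }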
W0515 22:02:17.436] I0515 22:02:17.433780   47613 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0515 22:02:17.436] I0515 22:02:17.433809   47613 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0515 22:02:17.437] I0515 22:02:17.436847   47613 client.go:354] parsed scheme: ""
W0515 22:02:17.437] I0515 22:02:17.436899   47613 client.go:354] scheme "" not registered, fallback to default scheme
W0515 22:02:17.438] I0515 22:02:17.436963   47613 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0515 22:02:17.438] I0515 22:02:17.437225   47613 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 361 lines ...
W0515 22:02:18.043] W0515 22:02:18.043004   47613 genericapiserver.go:347] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0515 22:02:18.430] I0515 22:02:18.430044   47613 client.go:354] parsed scheme: ""
W0515 22:02:18.431] I0515 22:02:18.430081   47613 client.go:354] scheme "" not registered, fallback to default scheme
W0515 22:02:18.431] I0515 22:02:18.430151   47613 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0515 22:02:18.431] I0515 22:02:18.430259   47613 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0515 22:02:18.431] I0515 22:02:18.430691   47613 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0515 22:02:19.048] E0515 22:02:19.048156   47613 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 22:02:19.049] E0515 22:02:19.048233   47613 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 22:02:19.049] E0515 22:02:19.048319   47613 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 22:02:19.049] E0515 22:02:19.048343   47613 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 22:02:19.050] E0515 22:02:19.048434   47613 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 22:02:19.050] E0515 22:02:19.048462   47613 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 22:02:19.050] E0515 22:02:19.048483   47613 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 22:02:19.050] E0515 22:02:19.048502   47613 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 22:02:19.051] E0515 22:02:19.048586   47613 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 22:02:19.051] E0515 22:02:19.048616   47613 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 22:02:19.051] E0515 22:02:19.048633   47613 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 22:02:19.051] E0515 22:02:19.048655   47613 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 22:02:19.052] I0515 22:02:19.048700   47613 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0515 22:02:19.052] I0515 22:02:19.048709   47613 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0515 22:02:19.052] I0515 22:02:19.050150   47613 client.go:354] parsed scheme: ""
W0515 22:02:19.052] I0515 22:02:19.050175   47613 client.go:354] scheme "" not registered, fallback to default scheme
W0515 22:02:19.053] I0515 22:02:19.050229   47613 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0515 22:02:19.053] I0515 22:02:19.050285   47613 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 17 lines ...
W0515 22:02:20.725] I0515 22:02:20.723297   47613 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
W0515 22:02:20.725] I0515 22:02:20.723148   47613 establishing_controller.go:73] Starting EstablishingController
W0515 22:02:20.725] I0515 22:02:20.723196   47613 customresource_discovery_controller.go:208] Starting DiscoveryController
W0515 22:02:20.725] I0515 22:02:20.723206   47613 naming_controller.go:288] Starting NamingConditionController
W0515 22:02:20.725] I0515 22:02:20.723222   47613 nonstructuralschema_controller.go:170] Starting NonStructuralSchemaConditionController
W0515 22:02:20.726] E0515 22:02:20.725128   47613 controller.go:148] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.2, ResourceVersion: 0, AdditionalErrorMsg: 
W0515 22:02:20.737] I0515 22:02:20.737041   47613 log.go:172] http2: server: error reading preface from client 127.0.0.1:50954: read tcp 127.0.0.1:6443->127.0.0.1:50954: read: connection reset by peer
W0515 22:02:20.823] I0515 22:02:20.823002   47613 controller_utils.go:1036] Caches are synced for crd-autoregister controller
W0515 22:02:20.824] I0515 22:02:20.822998   47613 cache.go:39] Caches are synced for AvailableConditionController controller
W0515 22:02:20.824] I0515 22:02:20.822998   47613 cache.go:39] Caches are synced for autoregister controller
W0515 22:02:20.827] I0515 22:02:20.827239   47613 cache.go:39] Caches are synced for APIServiceRegistrationController controller
W0515 22:02:21.721] I0515 22:02:21.720578   47613 controller.go:107] OpenAPI AggregationController: Processing item 
W0515 22:02:21.721] I0515 22:02:21.720630   47613 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
... skipping 111 lines ...
W0515 22:03:03.546] I0515 22:03:03.420560   50944 controllermanager.go:523] Started "namespace"
W0515 22:03:03.546] I0515 22:03:03.420587   50944 core.go:170] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W0515 22:03:03.546] W0515 22:03:03.420594   50944 controllermanager.go:515] Skipping "route"
W0515 22:03:03.546] I0515 22:03:03.420616   50944 namespace_controller.go:186] Starting namespace controller
W0515 22:03:03.547] I0515 22:03:03.420651   50944 controller_utils.go:1029] Waiting for caches to sync for namespace controller
W0515 22:03:03.547] I0515 22:03:03.420925   50944 node_lifecycle_controller.go:77] Sending events to api server
W0515 22:03:03.547] E0515 22:03:03.421005   50944 core.go:160] failed to start cloud node lifecycle controller: no cloud provider provided
W0515 22:03:03.547] W0515 22:03:03.421015   50944 controllermanager.go:515] Skipping "cloud-node-lifecycle"
W0515 22:03:03.547] I0515 22:03:03.422802   50944 controllermanager.go:523] Started "horizontalpodautoscaling"
W0515 22:03:03.547] I0515 22:03:03.423013   50944 horizontal.go:156] Starting HPA controller
W0515 22:03:03.547] I0515 22:03:03.423085   50944 controller_utils.go:1029] Waiting for caches to sync for HPA controller
W0515 22:03:03.548] I0515 22:03:03.423465   50944 controllermanager.go:523] Started "csrapproving"
W0515 22:03:03.548] I0515 22:03:03.425144   50944 controllermanager.go:523] Started "persistentvolume-binder"
... skipping 17 lines ...
W0515 22:03:03.839] I0515 22:03:03.838768   50944 controllermanager.go:523] Started "job"
W0515 22:03:03.839] I0515 22:03:03.838929   50944 job_controller.go:143] Starting job controller
W0515 22:03:03.839] I0515 22:03:03.838956   50944 controller_utils.go:1029] Waiting for caches to sync for job controller
W0515 22:03:03.839] I0515 22:03:03.839282   50944 controllermanager.go:523] Started "csrcleaner"
W0515 22:03:03.840] W0515 22:03:03.839298   50944 controllermanager.go:502] "bootstrapsigner" is disabled
W0515 22:03:03.840] I0515 22:03:03.839794   50944 cleaner.go:81] Starting CSR cleaner controller
W0515 22:03:03.842] E0515 22:03:03.841843   50944 core.go:76] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0515 22:03:03.842] W0515 22:03:03.841902   50944 controllermanager.go:515] Skipping "service"
W0515 22:03:03.882] The Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.0.0.1": provided IP is already allocated
W0515 22:03:03.896] W0515 22:03:03.895514   50944 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0515 22:03:03.907] I0515 22:03:03.907135   50944 controller_utils.go:1036] Caches are synced for stateful set controller
W0515 22:03:03.908] I0515 22:03:03.907135   50944 controller_utils.go:1036] Caches are synced for expand controller
W0515 22:03:03.908] I0515 22:03:03.908270   50944 controller_utils.go:1036] Caches are synced for PV protection controller
W0515 22:03:03.909] I0515 22:03:03.909434   50944 controller_utils.go:1036] Caches are synced for PVC protection controller
W0515 22:03:03.910] I0515 22:03:03.910178   50944 controller_utils.go:1036] Caches are synced for deployment controller
W0515 22:03:03.910] I0515 22:03:03.910667   50944 controller_utils.go:1036] Caches are synced for ReplicaSet controller
... skipping 105 lines ...
I0515 22:03:08.258] +++ working dir: /go/src/k8s.io/kubernetes
I0515 22:03:08.262] +++ command: run_RESTMapper_evaluation_tests
I0515 22:03:08.277] +++ [0515 22:03:08] Creating namespace namespace-1557957788-9406
I0515 22:03:08.378] namespace/namespace-1557957788-9406 created
I0515 22:03:08.461] Context "test" modified.
I0515 22:03:08.471] +++ [0515 22:03:08] Testing RESTMapper
I0515 22:03:08.605] +++ [0515 22:03:08] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0515 22:03:08.631] +++ exit code: 0
I0515 22:03:08.793] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0515 22:03:08.794] bindings                                                                      true         Binding
I0515 22:03:08.794] componentstatuses                 cs                                          false        ComponentStatus
I0515 22:03:08.794] configmaps                        cm                                          true         ConfigMap
I0515 22:03:08.794] endpoints                         ep                                          true         Endpoints
... skipping 640 lines ...
I0515 22:03:30.860] core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0515 22:03:31.060] (Bcore.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0515 22:03:31.173] (Bcore.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0515 22:03:31.372] (Bcore.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0515 22:03:31.484] (Bcore.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0515 22:03:31.583] (Bpod "valid-pod" force deleted
W0515 22:03:31.684] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0515 22:03:31.684] error: setting 'all' parameter but found a non empty selector. 
W0515 22:03:31.684] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0515 22:03:31.785] core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{$id_field}}:{{end}}: 
I0515 22:03:31.817] (Bcore.sh:211: Successful get namespaces {{range.items}}{{ if eq $id_field \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
I0515 22:03:31.902] (Bnamespace/test-kubectl-describe-pod created
I0515 22:03:32.012] core.sh:215: Successful get namespaces/test-kubectl-describe-pod {{.metadata.name}}: test-kubectl-describe-pod
I0515 22:03:32.130] (Bcore.sh:219: Successful get secrets --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 11 lines ...
I0515 22:03:33.263] (Bpoddisruptionbudget.policy/test-pdb-3 created
I0515 22:03:33.378] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0515 22:03:33.464] (Bpoddisruptionbudget.policy/test-pdb-4 created
I0515 22:03:33.575] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0515 22:03:33.768] (Bcore.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0515 22:03:34.006] (Bpod/env-test-pod created
W0515 22:03:34.107] error: min-available and max-unavailable cannot be both specified
I0515 22:03:34.248] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0515 22:03:34.248] Name:         env-test-pod
I0515 22:03:34.248] Namespace:    test-kubectl-describe-pod
I0515 22:03:34.249] Priority:     0
I0515 22:03:34.249] Node:         <none>
I0515 22:03:34.249] Labels:       <none>
... skipping 143 lines ...
I0515 22:03:47.856] (Bservice "modified" deleted
I0515 22:03:47.961] replicationcontroller "modified" deleted
I0515 22:03:48.313] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0515 22:03:48.527] (Bpod/valid-pod created
I0515 22:03:48.659] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0515 22:03:48.844] (BSuccessful
I0515 22:03:48.845] message:Error from server: cannot restore map from string
I0515 22:03:48.845] has:cannot restore map from string
W0515 22:03:48.945] E0515 22:03:48.832242   47613 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
I0515 22:03:49.046] Successful
I0515 22:03:49.046] message:pod/valid-pod patched (no change)
I0515 22:03:49.047] has:patched (no change)
I0515 22:03:49.049] pod/valid-pod patched
I0515 22:03:49.162] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0515 22:03:49.271] (Bcore.sh:457: Successful get pods {{range.items}}{{.metadata.annotations}}:{{end}}: map[kubernetes.io/change-cause:kubectl patch pod valid-pod --server=http://127.0.0.1:8080 --match-server-version=true --record=true --patch={"spec":{"containers":[{"name": "kubernetes-serve-hostname", "image": "nginx"}]}}]:
... skipping 4 lines ...
I0515 22:03:49.778] (Bpod/valid-pod patched
I0515 22:03:49.894] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0515 22:03:49.983] (Bpod/valid-pod patched
I0515 22:03:50.098] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0515 22:03:50.291] (Bpod/valid-pod patched
I0515 22:03:50.422] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0515 22:03:50.639] (B+++ [0515 22:03:50] "kubectl patch with resourceVersion 507" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0515 22:03:50.953] pod "valid-pod" deleted
I0515 22:03:50.965] pod/valid-pod replaced
I0515 22:03:51.098] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0515 22:03:51.313] (BSuccessful
I0515 22:03:51.313] message:error: --grace-period must have --force specified
I0515 22:03:51.313] has:\-\-grace-period must have \-\-force specified
I0515 22:03:51.520] Successful
I0515 22:03:51.520] message:error: --timeout must have --force specified
I0515 22:03:51.520] has:\-\-timeout must have \-\-force specified
I0515 22:03:51.728] node/node-v1-test created
W0515 22:03:51.829] W0515 22:03:51.729194   50944 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0515 22:03:51.958] node/node-v1-test replaced
I0515 22:03:52.090] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0515 22:03:52.184] (Bnode "node-v1-test" deleted
I0515 22:03:52.303] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0515 22:03:52.657] (Bcore.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0515 22:03:54.016] (Bcore.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 12 lines ...
I0515 22:03:54.210]     name: kubernetes-pause
I0515 22:03:54.210] has:localonlyvalue
W0515 22:03:54.311] Edit cancelled, no changes made.
W0515 22:03:54.311] Edit cancelled, no changes made.
W0515 22:03:54.311] Edit cancelled, no changes made.
W0515 22:03:54.311] Edit cancelled, no changes made.
W0515 22:03:54.407] error: 'name' already has a value (valid-pod), and --overwrite is false
I0515 22:03:54.507] core.sh:585: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0515 22:03:54.519] (Bcore.sh:589: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0515 22:03:54.624] (Bcore.sh:593: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0515 22:03:54.721] (Bpod/valid-pod labeled
I0515 22:03:54.837] core.sh:597: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
I0515 22:03:54.940] (Bcore.sh:601: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
... skipping 86 lines ...
I0515 22:04:04.081] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0515 22:04:04.085] +++ working dir: /go/src/k8s.io/kubernetes
I0515 22:04:04.089] +++ command: run_kubectl_create_error_tests
I0515 22:04:04.104] +++ [0515 22:04:04] Creating namespace namespace-1557957844-23051
I0515 22:04:04.205] namespace/namespace-1557957844-23051 created
I0515 22:04:04.299] Context "test" modified.
I0515 22:04:04.335] +++ [0515 22:04:04] Testing kubectl create with error
W0515 22:04:04.436] Error: must specify one of -f and -k
W0515 22:04:04.437] 
W0515 22:04:04.437] Create a resource from a file or from stdin.
W0515 22:04:04.437] 
W0515 22:04:04.437]  JSON and YAML formats are accepted.
W0515 22:04:04.437] 
W0515 22:04:04.437] Examples:
... skipping 41 lines ...
W0515 22:04:04.445] 
W0515 22:04:04.445] Usage:
W0515 22:04:04.445]   kubectl create -f FILENAME [options]
W0515 22:04:04.445] 
W0515 22:04:04.445] Use "kubectl <command> --help" for more information about a given command.
W0515 22:04:04.445] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0515 22:04:04.683] +++ [0515 22:04:04] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0515 22:04:04.784] kubectl convert is DEPRECATED and will be removed in a future version.
W0515 22:04:04.784] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0515 22:04:04.933] +++ exit code: 0
I0515 22:04:04.983] Recording: run_kubectl_apply_tests
I0515 22:04:04.983] Running command: run_kubectl_apply_tests
I0515 22:04:05.013] 
... skipping 20 lines ...
W0515 22:04:08.160] I0515 22:04:08.159927   47613 client.go:354] scheme "" not registered, fallback to default scheme
W0515 22:04:08.161] I0515 22:04:08.159973   47613 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0515 22:04:08.161] I0515 22:04:08.160057   47613 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0515 22:04:08.161] I0515 22:04:08.160697   47613 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0515 22:04:08.163] I0515 22:04:08.163104   47613 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
I0515 22:04:08.264] kind.mygroup.example.com/myobj serverside-applied (server dry run)
W0515 22:04:08.364] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0515 22:04:08.465] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0515 22:04:08.465] +++ exit code: 0
I0515 22:04:08.477] Recording: run_kubectl_run_tests
I0515 22:04:08.478] Running command: run_kubectl_run_tests
I0515 22:04:08.502] 
I0515 22:04:08.505] +++ Running case: test-cmd.run_kubectl_run_tests 
... skipping 95 lines ...
I0515 22:04:11.446] Context "test" modified.
I0515 22:04:11.457] +++ [0515 22:04:11] Testing kubectl create filter
I0515 22:04:11.568] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0515 22:04:11.788] (Bpod/selector-test-pod created
I0515 22:04:11.914] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0515 22:04:12.014] (BSuccessful
I0515 22:04:12.015] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0515 22:04:12.015] has:pods "selector-test-pod-dont-apply" not found
I0515 22:04:12.106] pod "selector-test-pod" deleted
I0515 22:04:12.136] +++ exit code: 0
I0515 22:04:12.178] Recording: run_kubectl_apply_deployments_tests
I0515 22:04:12.178] Running command: run_kubectl_apply_deployments_tests
I0515 22:04:12.203] 
... skipping 38 lines ...
W0515 22:04:15.384] I0515 22:04:15.288699   50944 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557957852-20507", Name:"nginx", UID:"f4a35add-6073-4d1f-84a0-2051baeb70b4", APIVersion:"apps/v1", ResourceVersion:"610", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-8c9ccf86d to 3
W0515 22:04:15.384] I0515 22:04:15.293485   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557957852-20507", Name:"nginx-8c9ccf86d", UID:"a20ea7fe-413e-4caa-8629-2d64df465676", APIVersion:"apps/v1", ResourceVersion:"611", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8c9ccf86d-w7kbl
W0515 22:04:15.384] I0515 22:04:15.298087   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557957852-20507", Name:"nginx-8c9ccf86d", UID:"a20ea7fe-413e-4caa-8629-2d64df465676", APIVersion:"apps/v1", ResourceVersion:"611", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8c9ccf86d-l57g5
W0515 22:04:15.385] I0515 22:04:15.298522   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557957852-20507", Name:"nginx-8c9ccf86d", UID:"a20ea7fe-413e-4caa-8629-2d64df465676", APIVersion:"apps/v1", ResourceVersion:"611", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8c9ccf86d-n2dwj
I0515 22:04:15.485] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I0515 22:04:19.753] (BSuccessful
I0515 22:04:19.754] message:Error from server (Conflict): error when applying patch:
I0515 22:04:19.754] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1557957852-20507\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0515 22:04:19.755] to:
I0515 22:04:19.755] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I0515 22:04:19.755] Name: "nginx", Namespace: "namespace-1557957852-20507"
I0515 22:04:19.757] Object: &{map["apiVersion":"extensions/v1beta1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1557957852-20507\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-05-15T22:04:15Z" "generation":'\x01' "labels":map["name":"nginx"] "managedFields":[map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map["f:deployment.kubernetes.io/revision":map[]]] "f:status":map["f:conditions":map[".":map[] "k:{\"type\":\"Available\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]]] "f:observedGeneration":map[] "f:replicas":map[] "f:unavailableReplicas":map[] "f:updatedReplicas":map[]]] "manager":"kube-controller-manager" "operation":"Update" "time":"2019-05-15T22:04:15Z"] map["apiVersion":"extensions/v1beta1" "fields":map["f:metadata":map["f:annotations":map[".":map[] "f:kubectl.kubernetes.io/last-applied-configuration":map[]] "f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:progressDeadlineSeconds":map[] "f:replicas":map[] "f:revisionHistoryLimit":map[] "f:selector":map[".":map[] "f:matchLabels":map[".":map[] "f:name":map[]]] "f:strategy":map["f:rollingUpdate":map[".":map[] "f:maxSurge":map[] "f:maxUnavailable":map[]] "f:type":map[]] "f:template":map["f:metadata":map["f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:containers":map["k:{\"name\":\"nginx\"}":map[".":map[] "f:image":map[] "f:imagePullPolicy":map[] "f:name":map[] "f:ports":map[".":map[] "k:{\"containerPort\":80,\"protocol\":\"TCP\"}":map[".":map[] "f:containerPort":map[] "f:protocol":map[]]] "f:resources":map[] "f:terminationMessagePath":map[] "f:terminationMessagePolicy":map[]]] "f:dnsPolicy":map[] "f:restartPolicy":map[] "f:schedulerName":map[] "f:securityContext":map[] "f:terminationGracePeriodSeconds":map[]]]]] "manager":"kubectl" "operation":"Update" "time":"2019-05-15T22:04:15Z"]] "name":"nginx" "namespace":"namespace-1557957852-20507" "resourceVersion":"623" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1557957852-20507/deployments/nginx" "uid":"f4a35add-6073-4d1f-84a0-2051baeb70b4"] "spec":map["progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\x03' "revisionHistoryLimit":%!q(int64=+2147483647) "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":'\x01' "maxUnavailable":'\x01'] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] "status":map["conditions":[map["lastTransitionTime":"2019-05-15T22:04:15Z" "lastUpdateTime":"2019-05-15T22:04:15Z" "message":"Deployment does not have minimum 
availability." "reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0515 22:04:19.757] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I0515 22:04:19.757] has:Error from server (Conflict)
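
Note: this Conflict is the API server's optimistic-concurrency check at work: the patch carried resourceVersion "99", but the live deployment had moved on, so the write was refused with "the object has been modified". Clients typically re-read and retry; a sketch using client-go's retry helper, written against the context-free client-go signatures of this era, with the namespace and name taken from the log as placeholders:

    package example

    import (
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/util/retry"
    )

    // updateNginxLabels re-runs the read-modify-write whenever the API server
    // answers with a Conflict, so each attempt writes a fresh resourceVersion.
    func updateNginxLabels(clientset *kubernetes.Clientset) error {
    	deployments := clientset.AppsV1().Deployments("namespace-1557957852-20507")
    	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
    		d, err := deployments.Get("nginx", metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		if d.Labels == nil {
    			d.Labels = map[string]string{}
    		}
    		d.Labels["name"] = "nginx2"
    		_, err = deployments.Update(d)
    		return err
    	})
    }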
W0515 22:04:19.858] I0515 22:04:18.248115   50944 horizontal.go:320] Horizontal Pod Autoscaler frontend has been deleted in namespace-1557957840-27230
I0515 22:04:25.088] deployment.extensions/nginx configured
W0515 22:04:25.189] I0515 22:04:25.093702   50944 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557957852-20507", Name:"nginx", UID:"52f8fc25-6ea7-4c4f-9b97-898510e9da85", APIVersion:"apps/v1", ResourceVersion:"646", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-86bb9b4d9f to 3
W0515 22:04:25.190] I0515 22:04:25.098665   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557957852-20507", Name:"nginx-86bb9b4d9f", UID:"48fed568-8dae-41ce-a1fa-24dae4087ce9", APIVersion:"apps/v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-86bb9b4d9f-t7dgn
W0515 22:04:25.190] I0515 22:04:25.102643   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557957852-20507", Name:"nginx-86bb9b4d9f", UID:"48fed568-8dae-41ce-a1fa-24dae4087ce9", APIVersion:"apps/v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-86bb9b4d9f-dmjwn
W0515 22:04:25.190] I0515 22:04:25.104859   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557957852-20507", Name:"nginx-86bb9b4d9f", UID:"48fed568-8dae-41ce-a1fa-24dae4087ce9", APIVersion:"apps/v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-86bb9b4d9f-jn4rb
... skipping 193 lines ...
I0515 22:04:33.264] +++ [0515 22:04:33] Creating namespace namespace-1557957873-9860
I0515 22:04:33.349] namespace/namespace-1557957873-9860 created
I0515 22:04:33.434] Context "test" modified.
I0515 22:04:33.443] +++ [0515 22:04:33] Testing kubectl get
I0515 22:04:33.552] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0515 22:04:33.649] (BSuccessful
I0515 22:04:33.650] message:Error from server (NotFound): pods "abc" not found
I0515 22:04:33.650] has:pods "abc" not found
I0515 22:04:33.754] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0515 22:04:33.851] (BSuccessful
I0515 22:04:33.851] message:Error from server (NotFound): pods "abc" not found
I0515 22:04:33.851] has:pods "abc" not found
I0515 22:04:33.962] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0515 22:04:34.064] (BSuccessful
I0515 22:04:34.064] message:{
I0515 22:04:34.064]     "apiVersion": "v1",
I0515 22:04:34.064]     "items": [],
... skipping 23 lines ...
I0515 22:04:34.457] has not:No resources found
I0515 22:04:34.551] Successful
I0515 22:04:34.551] message:NAME
I0515 22:04:34.552] has not:No resources found
I0515 22:04:34.653] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0515 22:04:34.770] (BSuccessful
I0515 22:04:34.771] message:error: the server doesn't have a resource type "foobar"
I0515 22:04:34.771] has not:No resources found
I0515 22:04:34.865] Successful
I0515 22:04:34.865] message:No resources found.
I0515 22:04:34.866] has:No resources found
I0515 22:04:34.961] Successful
I0515 22:04:34.961] message:
I0515 22:04:34.961] has not:No resources found
I0515 22:04:35.058] Successful
I0515 22:04:35.058] message:No resources found.
I0515 22:04:35.059] has:No resources found
I0515 22:04:35.161] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0515 22:04:35.255] (BSuccessful
I0515 22:04:35.256] message:Error from server (NotFound): pods "abc" not found
I0515 22:04:35.256] has:pods "abc" not found
I0515 22:04:35.258] FAIL!
I0515 22:04:35.258] message:Error from server (NotFound): pods "abc" not found
I0515 22:04:35.258] has not:List
I0515 22:04:35.258] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0515 22:04:35.392] Successful
I0515 22:04:35.393] message:I0515 22:04:35.334748   61666 loader.go:359] Config loaded from file:  /tmp/tmp.73GA9c5b4H/.kube/config
I0515 22:04:35.393] I0515 22:04:35.336112   61666 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 0 milliseconds
I0515 22:04:35.394] I0515 22:04:35.361073   61666 round_trippers.go:438] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 888 lines ...
I0515 22:04:41.195] Successful
I0515 22:04:41.195] message:NAME    DATA   AGE
I0515 22:04:41.195] one     0      1s
I0515 22:04:41.195] three   0      0s
I0515 22:04:41.195] two     0      1s
I0515 22:04:41.195] STATUS    REASON          MESSAGE
I0515 22:04:41.196] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0515 22:04:41.196] has not:watch is only supported on individual resources
I0515 22:04:42.294] Successful
I0515 22:04:42.295] message:STATUS    REASON          MESSAGE
I0515 22:04:42.295] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0515 22:04:42.295] has not:watch is only supported on individual resources
I0515 22:04:42.302] +++ [0515 22:04:42] Creating namespace namespace-1557957882-21330
I0515 22:04:42.387] namespace/namespace-1557957882-21330 created
I0515 22:04:42.465] Context "test" modified.
I0515 22:04:42.577] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0515 22:04:42.784] (Bpod/valid-pod created
... skipping 104 lines ...
I0515 22:04:42.902] }
I0515 22:04:43.011] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0515 22:04:43.289] (B<no value>Successful
I0515 22:04:43.289] message:valid-pod:
I0515 22:04:43.289] has:valid-pod:
I0515 22:04:43.383] Successful
I0515 22:04:43.383] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0515 22:04:43.384] 	template was:
I0515 22:04:43.384] 		{.missing}
I0515 22:04:43.384] 	object given to jsonpath engine was:
I0515 22:04:43.385] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-05-15T22:04:42Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-05-15T22:04:42Z"}}, "name":"valid-pod", "namespace":"namespace-1557957882-21330", "resourceVersion":"722", "selfLink":"/api/v1/namespaces/namespace-1557957882-21330/pods/valid-pod", "uid":"0e176ef9-1e29-46d0-aaa4-e39b0a62793c"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0515 22:04:43.386] has:missing is not found
W0515 22:04:43.486] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
I0515 22:04:43.587] Successful
I0515 22:04:43.587] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0515 22:04:43.587] 	template was:
I0515 22:04:43.587] 		{{.missing}}
I0515 22:04:43.587] 	raw data was:
I0515 22:04:43.588] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-05-15T22:04:42Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-05-15T22:04:42Z"}],"name":"valid-pod","namespace":"namespace-1557957882-21330","resourceVersion":"722","selfLink":"/api/v1/namespaces/namespace-1557957882-21330/pods/valid-pod","uid":"0e176ef9-1e29-46d0-aaa4-e39b0a62793c"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0515 22:04:43.589] 	object given to template engine was:
I0515 22:04:43.589] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-05-15T22:04:42Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-05-15T22:04:42Z]] name:valid-pod namespace:namespace-1557957882-21330 resourceVersion:722 selfLink:/api/v1/namespaces/namespace-1557957882-21330/pods/valid-pod uid:0e176ef9-1e29-46d0-aaa4-e39b0a62793c] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0515 22:04:43.590] has:map has no entry for key "missing"
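
Note: the two failures above show kubectl's two output engines disagreeing in wording but not in substance: the jsonpath printer reports "missing is not found", while the Go template printer reports "map has no entry for key". The latter matches the standard library's text/template behavior under the missingkey=error option; a minimal reproduction:

    package main

    import (
    	"fmt"
    	"os"
    	"text/template"
    )

    func main() {
    	pod := map[string]interface{}{"kind": "Pod", "apiVersion": "v1"}

    	// With missingkey=error, referencing an absent key fails instead of
    	// printing "<no value>", producing the error shown in the log above.
    	tmpl := template.Must(template.New("output").
    		Option("missingkey=error").
    		Parse("{{.missing}}"))
    	if err := tmpl.Execute(os.Stdout, pod); err != nil {
    		// template: output:1:2: executing "output" at <.missing>:
    		// map has no entry for key "missing"
    		fmt.Println(err)
    	}
    }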
I0515 22:04:44.601] Successful
I0515 22:04:44.601] message:NAME        READY   STATUS    RESTARTS   AGE
I0515 22:04:44.601] valid-pod   0/1     Pending   0          1s
I0515 22:04:44.602] STATUS      REASON          MESSAGE
I0515 22:04:44.602] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0515 22:04:44.602] has:STATUS
I0515 22:04:44.603] Successful
I0515 22:04:44.603] message:NAME        READY   STATUS    RESTARTS   AGE
I0515 22:04:44.604] valid-pod   0/1     Pending   0          1s
I0515 22:04:44.604] STATUS      REASON          MESSAGE
I0515 22:04:44.604] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0515 22:04:44.604] has:valid-pod
I0515 22:04:45.701] Successful
I0515 22:04:45.701] message:pod/valid-pod
I0515 22:04:45.701] has not:STATUS
I0515 22:04:45.704] Successful
I0515 22:04:45.704] message:pod/valid-pod
... skipping 142 lines ...
I0515 22:04:46.823]   terminationGracePeriodSeconds: 30
I0515 22:04:46.823] status:
I0515 22:04:46.823]   phase: Pending
I0515 22:04:46.823]   qosClass: Guaranteed
I0515 22:04:46.823] has:name: valid-pod
I0515 22:04:46.917] Successful
I0515 22:04:46.917] message:Error from server (NotFound): pods "invalid-pod" not found
I0515 22:04:46.917] has:"invalid-pod" not found
I0515 22:04:47.012] pod "valid-pod" deleted
I0515 22:04:47.131] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0515 22:04:47.354] (Bpod/redis-master created
I0515 22:04:47.359] pod/valid-pod created
I0515 22:04:47.483] Successful
... skipping 283 lines ...
I0515 22:04:53.931] +++ command: run_kubectl_exec_pod_tests
I0515 22:04:53.943] +++ [0515 22:04:53] Creating namespace namespace-1557957893-15051
I0515 22:04:54.029] namespace/namespace-1557957893-15051 created
I0515 22:04:54.112] Context "test" modified.
I0515 22:04:54.123] +++ [0515 22:04:54] Testing kubectl exec POD COMMAND
I0515 22:04:54.217] Successful
I0515 22:04:54.217] message:Error from server (NotFound): pods "abc" not found
I0515 22:04:54.217] has:pods "abc" not found
I0515 22:04:54.435] pod/test-pod created
I0515 22:04:54.564] Successful
I0515 22:04:54.564] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0515 22:04:54.565] has not:pods "test-pod" not found
I0515 22:04:54.567] Successful
I0515 22:04:54.568] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0515 22:04:54.568] has not:pod or type/name must be specified
I0515 22:04:54.661] pod "test-pod" deleted
I0515 22:04:54.686] +++ exit code: 0
I0515 22:04:54.726] Recording: run_kubectl_exec_resource_name_tests
I0515 22:04:54.727] Running command: run_kubectl_exec_resource_name_tests
I0515 22:04:54.749] 
... skipping 2 lines ...
I0515 22:04:54.758] +++ command: run_kubectl_exec_resource_name_tests
I0515 22:04:54.773] +++ [0515 22:04:54] Creating namespace namespace-1557957894-19051
I0515 22:04:54.855] namespace/namespace-1557957894-19051 created
I0515 22:04:54.938] Context "test" modified.
I0515 22:04:54.948] +++ [0515 22:04:54] Testing kubectl exec TYPE/NAME COMMAND
I0515 22:04:55.065] Successful
I0515 22:04:55.066] message:error: the server doesn't have a resource type "foo"
I0515 22:04:55.066] has:error:
I0515 22:04:55.165] Successful
I0515 22:04:55.165] message:Error from server (NotFound): deployments.extensions "bar" not found
I0515 22:04:55.165] has:"bar" not found
I0515 22:04:55.379] pod/test-pod created
I0515 22:04:55.618] replicaset.apps/frontend created
W0515 22:04:55.718] I0515 22:04:55.623329   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557957894-19051", Name:"frontend", UID:"f2d01a01-e2a3-4194-955a-f5577533b5a6", APIVersion:"apps/v1", ResourceVersion:"840", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-9mhf2
W0515 22:04:55.719] I0515 22:04:55.628185   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557957894-19051", Name:"frontend", UID:"f2d01a01-e2a3-4194-955a-f5577533b5a6", APIVersion:"apps/v1", ResourceVersion:"840", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-qrhmj
W0515 22:04:55.719] I0515 22:04:55.628233   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557957894-19051", Name:"frontend", UID:"f2d01a01-e2a3-4194-955a-f5577533b5a6", APIVersion:"apps/v1", ResourceVersion:"840", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-66gmf
I0515 22:04:55.853] configmap/test-set-env-config created
I0515 22:04:55.966] Successful
I0515 22:04:55.966] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I0515 22:04:55.967] has:not implemented
I0515 22:04:56.070] Successful
I0515 22:04:56.070] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0515 22:04:56.070] has not:not found
I0515 22:04:56.072] Successful
I0515 22:04:56.072] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0515 22:04:56.073] has not:pod or type/name must be specified
I0515 22:04:56.185] Successful
I0515 22:04:56.185] message:Error from server (BadRequest): pod frontend-66gmf does not have a host assigned
I0515 22:04:56.185] has not:not found
I0515 22:04:56.187] Successful
I0515 22:04:56.188] message:Error from server (BadRequest): pod frontend-66gmf does not have a host assigned
I0515 22:04:56.188] has not:pod or type/name must be specified
I0515 22:04:56.276] pod "test-pod" deleted
I0515 22:04:56.378] replicaset.extensions "frontend" deleted
I0515 22:04:56.479] configmap "test-set-env-config" deleted
I0515 22:04:56.505] +++ exit code: 0
I0515 22:04:56.547] Recording: run_create_secret_tests
I0515 22:04:56.547] Running command: run_create_secret_tests
I0515 22:04:56.573] 
I0515 22:04:56.575] +++ Running case: test-cmd.run_create_secret_tests 
I0515 22:04:56.578] +++ working dir: /go/src/k8s.io/kubernetes
I0515 22:04:56.581] +++ command: run_create_secret_tests
I0515 22:04:56.690] Successful
I0515 22:04:56.690] message:Error from server (NotFound): secrets "mysecret" not found
I0515 22:04:56.690] has:secrets "mysecret" not found
I0515 22:04:56.871] Successful
I0515 22:04:56.871] message:Error from server (NotFound): secrets "mysecret" not found
I0515 22:04:56.871] has:secrets "mysecret" not found
I0515 22:04:56.873] Successful
I0515 22:04:56.873] message:user-specified
I0515 22:04:56.874] has:user-specified
I0515 22:04:56.959] Successful
I0515 22:04:57.044] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"2f19f816-9450-46ab-b68b-12e40f2ada44","resourceVersion":"860","creationTimestamp":"2019-05-15T22:04:57Z"}}
... skipping 164 lines ...
I0515 22:05:00.360] valid-pod   0/1     Pending   0          0s
I0515 22:05:00.360] has:valid-pod
I0515 22:05:01.483] Successful
I0515 22:05:01.483] message:NAME        READY   STATUS    RESTARTS   AGE
I0515 22:05:01.484] valid-pod   0/1     Pending   0          0s
I0515 22:05:01.484] STATUS      REASON          MESSAGE
I0515 22:05:01.484] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0515 22:05:01.484] has:Timeout exceeded while reading body
I0515 22:05:01.584] Successful
I0515 22:05:01.584] message:NAME        READY   STATUS    RESTARTS   AGE
I0515 22:05:01.585] valid-pod   0/1     Pending   0          1s
I0515 22:05:01.585] has:valid-pod
I0515 22:05:01.678] Successful
I0515 22:05:01.678] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0515 22:05:01.679] has:Invalid timeout value
I0515 22:05:01.780] pod "valid-pod" deleted
I0515 22:05:01.807] +++ exit code: 0
I0515 22:05:01.860] Recording: run_crd_tests
I0515 22:05:01.861] Running command: run_crd_tests
I0515 22:05:01.886] 
... skipping 250 lines ...
W0515 22:05:07.586] I0515 22:05:06.742629   50944 controller_utils.go:1036] Caches are synced for garbage collector controller
I0515 22:05:07.686] crd.sh:237: Successful get foos/test {{.patched}}: value1
I0515 22:05:07.707] (Bfoo.company.com/test patched
I0515 22:05:07.826] crd.sh:239: Successful get foos/test {{.patched}}: value2
I0515 22:05:07.924] (Bfoo.company.com/test patched
I0515 22:05:08.039] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I0515 22:05:08.233] (B+++ [0515 22:05:08] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0515 22:05:08.307] {
I0515 22:05:08.308]     "apiVersion": "company.com/v1",
I0515 22:05:08.308]     "kind": "Foo",
I0515 22:05:08.308]     "metadata": {
I0515 22:05:08.308]         "annotations": {
I0515 22:05:08.308]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 305 lines ...
I0515 22:05:17.742] (Bnamespace/non-native-resources created
I0515 22:05:17.968] bar.company.com/test created
I0515 22:05:18.100] crd.sh:456: Successful get bars {{len .items}}: 1
I0515 22:05:18.191] (Bnamespace "non-native-resources" deleted
I0515 22:05:23.480] crd.sh:459: Successful get bars {{len .items}}: 0
I0515 22:05:23.680] (Bcustomresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0515 22:05:23.781] Error from server (NotFound): namespaces "non-native-resources" not found
I0515 22:05:23.882] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0515 22:05:23.921] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0515 22:05:24.045] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0515 22:05:24.090] +++ exit code: 0
I0515 22:05:24.157] Recording: run_cmd_with_img_tests
I0515 22:05:24.157] Running command: run_cmd_with_img_tests
... skipping 10 lines ...
W0515 22:05:24.511] I0515 22:05:24.511083   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557957924-12272", Name:"test1-7b9c75bcb9", UID:"82f7a400-f98d-4bec-b2d1-9344e2525274", APIVersion:"apps/v1", ResourceVersion:"1013", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-7b9c75bcb9-vq9sv
I0515 22:05:24.612] Successful
I0515 22:05:24.612] message:deployment.apps/test1 created
I0515 22:05:24.613] has:deployment.apps/test1 created
I0515 22:05:24.617] deployment.extensions "test1" deleted
I0515 22:05:24.717] Successful
I0515 22:05:24.717] message:error: Invalid image name "InvalidImageName": invalid reference format
I0515 22:05:24.718] has:error: Invalid image name "InvalidImageName": invalid reference format
I0515 22:05:24.734] +++ exit code: 0
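Note: run_cmd_with_img_tests verifies that kubectl validates image references on the client before contacting the server; "InvalidImageName" fails the container image reference grammar (uppercase letters are not allowed in a repository name). An illustrative reproduction, assuming the harness uses kubectl run:

    $ kubectl run test1 --image=k8s.gcr.io/pause:2.0    # well-formed reference, accepted
    $ kubectl run test1 --image=InvalidImageName        # error: Invalid image name ...: invalid reference format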
I0515 22:05:24.797] +++ [0515 22:05:24] Testing recursive resources
I0515 22:05:24.806] +++ [0515 22:05:24] Creating namespace namespace-1557957924-6509
I0515 22:05:24.899] namespace/namespace-1557957924-6509 created
I0515 22:05:24.982] Context "test" modified.
I0515 22:05:25.107] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0515 22:05:25.464] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0515 22:05:25.467] Successful
I0515 22:05:25.467] message:pod/busybox0 created
I0515 22:05:25.468] pod/busybox1 created
I0515 22:05:25.468] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0515 22:05:25.468] has:error validating data: kind not set
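Note: the recursive tests that follow all use the same fixture layout: a directory tree whose leaf directory holds two valid manifests plus busybox-broken.yaml, where the "kind" key is intentionally misspelled as "ind". With -f <dir> --recursive, kubectl keeps processing the valid files and aggregates the error, so busybox0 and busybox1 are created even though the command exits non-zero. The shape of the invocation (a sketch; the harness may pass extra flags):

    $ kubectl create -f hack/testdata/recursive/pod --recursive    # two pods created, broken manifest reported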
I0515 22:05:25.583] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0515 22:05:25.806] generic-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0515 22:05:25.809] Successful
I0515 22:05:25.810] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0515 22:05:25.810] has:Object 'Kind' is missing
I0515 22:05:25.920] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0515 22:05:26.292] generic-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0515 22:05:26.296] Successful
I0515 22:05:26.296] message:pod/busybox0 replaced
I0515 22:05:26.296] pod/busybox1 replaced
I0515 22:05:26.297] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0515 22:05:26.297] has:error validating data: kind not set
I0515 22:05:26.436] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0515 22:05:26.556] Successful
I0515 22:05:26.557] message:Name:         busybox0
I0515 22:05:26.557] Namespace:    namespace-1557957924-6509
I0515 22:05:26.557] Priority:     0
I0515 22:05:26.557] Node:         <none>
... skipping 153 lines ...
I0515 22:05:26.571] has:Object 'Kind' is missing
I0515 22:05:26.675] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0515 22:05:26.900] generic-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0515 22:05:26.904] Successful
I0515 22:05:26.904] message:pod/busybox0 annotated
I0515 22:05:26.904] pod/busybox1 annotated
I0515 22:05:26.905] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0515 22:05:26.905] has:Object 'Kind' is missing
I0515 22:05:27.015] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0515 22:05:27.390] generic-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0515 22:05:27.392] Successful
I0515 22:05:27.392] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0515 22:05:27.392] pod/busybox0 configured
I0515 22:05:27.392] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0515 22:05:27.392] pod/busybox1 configured
I0515 22:05:27.393] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0515 22:05:27.393] has:error validating data: kind not set
I0515 22:05:27.492] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0515 22:05:27.710] deployment.apps/nginx created
W0515 22:05:27.811] I0515 22:05:27.717459   50944 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557957924-6509", Name:"nginx", UID:"72bf837e-fc39-4d94-82b5-f3ee684b841e", APIVersion:"apps/v1", ResourceVersion:"1038", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-958dc566b to 3
W0515 22:05:27.812] I0515 22:05:27.722039   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557957924-6509", Name:"nginx-958dc566b", UID:"dcee4a7a-01f5-446d-8a88-c9104d2cb4be", APIVersion:"apps/v1", ResourceVersion:"1039", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-958dc566b-9tmmn
W0515 22:05:27.812] I0515 22:05:27.725747   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557957924-6509", Name:"nginx-958dc566b", UID:"dcee4a7a-01f5-446d-8a88-c9104d2cb4be", APIVersion:"apps/v1", ResourceVersion:"1039", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-958dc566b-mp8j8
W0515 22:05:27.813] I0515 22:05:27.726651   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557957924-6509", Name:"nginx-958dc566b", UID:"dcee4a7a-01f5-446d-8a88-c9104d2cb4be", APIVersion:"apps/v1", ResourceVersion:"1039", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-958dc566b-zqlhf
... skipping 49 lines ...
W0515 22:05:28.368] I0515 22:05:28.352161   50944 namespace_controller.go:171] Namespace has been deleted non-native-resources
I0515 22:05:28.468] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0515 22:05:28.588] generic-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0515 22:05:28.591] Successful
I0515 22:05:28.591] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0515 22:05:28.592] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0515 22:05:28.592] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0515 22:05:28.592] has:Object 'Kind' is missing
I0515 22:05:28.700] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0515 22:05:28.804] Successful
I0515 22:05:28.804] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0515 22:05:28.804] has:busybox0:busybox1:
I0515 22:05:28.806] Successful
I0515 22:05:28.806] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0515 22:05:28.807] has:Object 'Kind' is missing
I0515 22:05:28.912] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0515 22:05:29.020] pod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0515 22:05:29.127] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0515 22:05:29.129] Successful
I0515 22:05:29.129] message:pod/busybox0 labeled
I0515 22:05:29.130] pod/busybox1 labeled
I0515 22:05:29.130] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0515 22:05:29.130] has:Object 'Kind' is missing
I0515 22:05:29.243] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0515 22:05:29.356] pod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0515 22:05:29.465] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0515 22:05:29.467] Successful
I0515 22:05:29.467] message:pod/busybox0 patched
I0515 22:05:29.468] pod/busybox1 patched
I0515 22:05:29.468] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0515 22:05:29.468] has:Object 'Kind' is missing
I0515 22:05:29.579] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0515 22:05:29.791] generic-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0515 22:05:29.794] Successful
I0515 22:05:29.794] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0515 22:05:29.794] pod "busybox0" force deleted
I0515 22:05:29.795] pod "busybox1" force deleted
I0515 22:05:29.795] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0515 22:05:29.795] has:Object 'Kind' is missing
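Note: the cleanup step above pairs --force with --grace-period=0, which is what produces the "Immediate deletion does not wait for confirmation" warning; the objects are removed from the API without waiting for kubelet-confirmed termination. A sketch of the pattern:

    $ kubectl delete -f hack/testdata/recursive/pod --recursive --force --grace-period=0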
I0515 22:05:29.907] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0515 22:05:30.124] replicationcontroller/busybox0 created
I0515 22:05:30.129] replicationcontroller/busybox1 created
W0515 22:05:30.230] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0515 22:05:30.230] I0515 22:05:30.129786   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557957924-6509", Name:"busybox0", UID:"21dfd40d-8f86-4e55-8e74-90bb63e30158", APIVersion:"v1", ResourceVersion:"1069", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-vsj2s
W0515 22:05:30.231] I0515 22:05:30.133522   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557957924-6509", Name:"busybox1", UID:"1c694beb-3624-4667-bbf5-dd57023b01e0", APIVersion:"v1", ResourceVersion:"1071", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-bdctj
I0515 22:05:30.331] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0515 22:05:30.373] generic-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0515 22:05:30.487] generic-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I0515 22:05:30.603] generic-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I0515 22:05:30.827] generic-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0515 22:05:30.940] generic-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0515 22:05:30.943] Successful
I0515 22:05:30.943] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0515 22:05:30.943] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0515 22:05:30.944] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0515 22:05:30.944] has:Object 'Kind' is missing
I0515 22:05:31.037] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0515 22:05:31.134] horizontalpodautoscaler.autoscaling "busybox1" deleted
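Note: the HPA assertions above (minReplicas / maxReplicas / targetCPUUtilizationPercentage = 1 2 80) come from autoscaling the whole fixture directory at once; kubectl autoscale accepts -f with --recursive like the other commands in this block. A sketch with the same bounds:

    $ kubectl autoscale -f hack/testdata/recursive/rc --recursive --min=1 --max=2 --cpu-percent=80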
I0515 22:05:31.263] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0515 22:05:31.371] generic-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I0515 22:05:31.479] generic-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I0515 22:05:31.710] generic-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0515 22:05:31.820] generic-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0515 22:05:31.823] Successful
I0515 22:05:31.823] message:service/busybox0 exposed
I0515 22:05:31.823] service/busybox1 exposed
I0515 22:05:31.824] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0515 22:05:31.824] has:Object 'Kind' is missing
I0515 22:05:31.934] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0515 22:05:32.043] generic-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I0515 22:05:32.144] generic-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I0515 22:05:32.377] generic-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I0515 22:05:32.483] generic-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I0515 22:05:32.486] Successful
I0515 22:05:32.487] message:replicationcontroller/busybox0 scaled
I0515 22:05:32.487] replicationcontroller/busybox1 scaled
I0515 22:05:32.487] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0515 22:05:32.487] has:Object 'Kind' is missing
W0515 22:05:32.588] I0515 22:05:32.249002   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557957924-6509", Name:"busybox0", UID:"21dfd40d-8f86-4e55-8e74-90bb63e30158", APIVersion:"v1", ResourceVersion:"1090", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-qq8t6
W0515 22:05:32.589] I0515 22:05:32.258738   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557957924-6509", Name:"busybox1", UID:"1c694beb-3624-4667-bbf5-dd57023b01e0", APIVersion:"v1", ResourceVersion:"1094", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-lhd86
I0515 22:05:32.689] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0515 22:05:32.819] generic-resources.sh:381: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0515 22:05:32.822] Successful
I0515 22:05:32.822] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0515 22:05:32.822] replicationcontroller "busybox0" force deleted
I0515 22:05:32.822] replicationcontroller "busybox1" force deleted
I0515 22:05:32.823] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0515 22:05:32.823] has:Object 'Kind' is missing
I0515 22:05:32.940] generic-resources.sh:386: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0515 22:05:33.159] deployment.apps/nginx1-deployment created
I0515 22:05:33.164] deployment.apps/nginx0-deployment created
W0515 22:05:33.265] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0515 22:05:33.266] I0515 22:05:33.164921   50944 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557957924-6509", Name:"nginx1-deployment", UID:"af4e5d81-1c0b-4491-933a-c5b4e9105937", APIVersion:"apps/v1", ResourceVersion:"1110", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-67c99bcc6b to 2
W0515 22:05:33.266] I0515 22:05:33.169479   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557957924-6509", Name:"nginx1-deployment-67c99bcc6b", UID:"16f46c5b-b994-4eb7-a4e5-59e180b0d170", APIVersion:"apps/v1", ResourceVersion:"1111", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-67c99bcc6b-4pzxs
W0515 22:05:33.266] I0515 22:05:33.170550   50944 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557957924-6509", Name:"nginx0-deployment", UID:"45f23685-8a71-4861-8c2d-684feb580a8f", APIVersion:"apps/v1", ResourceVersion:"1112", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-5886cf98fc to 2
W0515 22:05:33.267] I0515 22:05:33.174451   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557957924-6509", Name:"nginx0-deployment-5886cf98fc", UID:"78c44b8c-446c-4936-8666-08bd3d7f4cbb", APIVersion:"apps/v1", ResourceVersion:"1115", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-5886cf98fc-bd485
W0515 22:05:33.267] I0515 22:05:33.178551   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557957924-6509", Name:"nginx1-deployment-67c99bcc6b", UID:"16f46c5b-b994-4eb7-a4e5-59e180b0d170", APIVersion:"apps/v1", ResourceVersion:"1111", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-67c99bcc6b-hchfs
W0515 22:05:33.267] I0515 22:05:33.179164   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557957924-6509", Name:"nginx0-deployment-5886cf98fc", UID:"78c44b8c-446c-4936-8666-08bd3d7f4cbb", APIVersion:"apps/v1", ResourceVersion:"1115", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-5886cf98fc-7bj2v
I0515 22:05:33.368] generic-resources.sh:390: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0515 22:05:33.433] generic-resources.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0515 22:05:33.676] generic-resources.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0515 22:05:33.679] Successful
I0515 22:05:33.679] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0515 22:05:33.679] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0515 22:05:33.680] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0515 22:05:33.680] has:Object 'Kind' is missing
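Note: "skipped rollback (current template already matches revision 1)" shows rollout undo short-circuiting when the target revision equals the live template. A sketch of the recursive form driving both deployments, assuming rollout undo is given the fixture directory (as the aggregated decode error indicates):

    $ kubectl rollout undo -f hack/testdata/recursive/deployment --recursive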
I0515 22:05:33.793] deployment.apps/nginx1-deployment paused
I0515 22:05:33.800] deployment.apps/nginx0-deployment paused
I0515 22:05:33.931] generic-resources.sh:402: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0515 22:05:33.934] Successful
I0515 22:05:33.934] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
I0515 22:05:34.348] 1         <none>
I0515 22:05:34.348] 
I0515 22:05:34.348] deployment.apps/nginx0-deployment 
I0515 22:05:34.349] REVISION  CHANGE-CAUSE
I0515 22:05:34.349] 1         <none>
I0515 22:05:34.349] 
I0515 22:05:34.349] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0515 22:05:34.349] has:nginx0-deployment
I0515 22:05:34.350] Successful
I0515 22:05:34.350] message:deployment.apps/nginx1-deployment 
I0515 22:05:34.351] REVISION  CHANGE-CAUSE
I0515 22:05:34.351] 1         <none>
I0515 22:05:34.351] 
I0515 22:05:34.351] deployment.apps/nginx0-deployment 
I0515 22:05:34.351] REVISION  CHANGE-CAUSE
I0515 22:05:34.351] 1         <none>
I0515 22:05:34.351] 
I0515 22:05:34.352] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0515 22:05:34.352] has:nginx1-deployment
I0515 22:05:34.352] Successful
I0515 22:05:34.352] message:deployment.apps/nginx1-deployment 
I0515 22:05:34.353] REVISION  CHANGE-CAUSE
I0515 22:05:34.353] 1         <none>
I0515 22:05:34.353] 
I0515 22:05:34.353] deployment.apps/nginx0-deployment 
I0515 22:05:34.353] REVISION  CHANGE-CAUSE
I0515 22:05:34.353] 1         <none>
I0515 22:05:34.353] 
I0515 22:05:34.354] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0515 22:05:34.354] has:Object 'Kind' is missing
I0515 22:05:34.451] deployment.apps "nginx1-deployment" force deleted
I0515 22:05:34.460] deployment.apps "nginx0-deployment" force deleted
W0515 22:05:34.560] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0515 22:05:34.561] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0515 22:05:35.588] generic-resources.sh:424: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0515 22:05:35.810] replicationcontroller/busybox0 created
I0515 22:05:35.815] replicationcontroller/busybox1 created
W0515 22:05:35.916] I0515 22:05:35.815180   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557957924-6509", Name:"busybox0", UID:"20437449-b680-41f8-97d7-f948f6247e7b", APIVersion:"v1", ResourceVersion:"1161", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-wvbxv
W0515 22:05:35.917] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0515 22:05:35.917] I0515 22:05:35.821829   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557957924-6509", Name:"busybox1", UID:"018f9178-143e-4f2a-ae64-c63264833da8", APIVersion:"v1", ResourceVersion:"1163", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-6vb8h
I0515 22:05:36.017] generic-resources.sh:428: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0515 22:05:36.076] Successful
I0515 22:05:36.076] message:no rollbacker has been implemented for "ReplicationController"
I0515 22:05:36.077] no rollbacker has been implemented for "ReplicationController"
I0515 22:05:36.077] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
... skipping 2 lines ...
I0515 22:05:36.079] message:no rollbacker has been implemented for "ReplicationController"
I0515 22:05:36.080] no rollbacker has been implemented for "ReplicationController"
I0515 22:05:36.080] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0515 22:05:36.080] has:Object 'Kind' is missing
I0515 22:05:36.195] Successful
I0515 22:05:36.195] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0515 22:05:36.196] error: replicationcontrollers "busybox0" pausing is not supported
I0515 22:05:36.196] error: replicationcontrollers "busybox1" pausing is not supported
I0515 22:05:36.196] has:Object 'Kind' is missing
I0515 22:05:36.198] Successful
I0515 22:05:36.198] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0515 22:05:36.199] error: replicationcontrollers "busybox0" pausing is not supported
I0515 22:05:36.199] error: replicationcontrollers "busybox1" pausing is not supported
I0515 22:05:36.199] has:replicationcontrollers "busybox0" pausing is not supported
I0515 22:05:36.201] Successful
I0515 22:05:36.202] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0515 22:05:36.202] error: replicationcontrollers "busybox0" pausing is not supported
I0515 22:05:36.202] error: replicationcontrollers "busybox1" pausing is not supported
I0515 22:05:36.202] has:replicationcontrollers "busybox1" pausing is not supported
I0515 22:05:36.316] Successful
I0515 22:05:36.317] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0515 22:05:36.317] error: replicationcontrollers "busybox0" resuming is not supported
I0515 22:05:36.317] error: replicationcontrollers "busybox1" resuming is not supported
I0515 22:05:36.318] has:Object 'Kind' is missing
I0515 22:05:36.318] Successful
I0515 22:05:36.319] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0515 22:05:36.319] error: replicationcontrollers "busybox0" resuming is not supported
I0515 22:05:36.319] error: replicationcontrollers "busybox1" resuming is not supported
I0515 22:05:36.319] has:replicationcontrollers "busybox0" resuming is not supported
I0515 22:05:36.321] Successful
I0515 22:05:36.322] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0515 22:05:36.322] error: replicationcontrollers "busybox0" resuming is not supported
I0515 22:05:36.322] error: replicationcontrollers "busybox1" resuming is not supported
I0515 22:05:36.322] has:replicationcontrollers "busybox0" resuming is not supported
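Note: rollout pause and resume are implemented only for kinds with rollout semantics (Deployments); for ReplicationControllers kubectl reports "pausing/resuming is not supported" per object, while still surfacing the decode error for the broken manifest. Illustrative commands:

    $ kubectl rollout pause -f hack/testdata/recursive/rc --recursive     # not supported for RCs
    $ kubectl rollout resume -f hack/testdata/recursive/rc --recursive    # not supported for RCs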
I0515 22:05:36.416] replicationcontroller "busybox0" force deleted
I0515 22:05:36.421] replicationcontroller "busybox1" force deleted
W0515 22:05:36.522] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0515 22:05:36.523] I0515 22:05:36.404988   50944 controller_utils.go:1029] Waiting for caches to sync for resource quota controller
W0515 22:05:36.523] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
W0515 22:05:36.523] I0515 22:05:36.505340   50944 controller_utils.go:1036] Caches are synced for resource quota controller
W0515 22:05:36.945] I0515 22:05:36.945205   50944 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
W0515 22:05:37.046] I0515 22:05:37.045724   50944 controller_utils.go:1036] Caches are synced for garbage collector controller
I0515 22:05:37.434] Recording: run_namespace_tests
I0515 22:05:37.434] Running command: run_namespace_tests
I0515 22:05:37.466] 
... skipping 3 lines ...
I0515 22:05:37.492] +++ [0515 22:05:37] Testing kubectl(v1:namespaces)
I0515 22:05:37.581] namespace/my-namespace created
I0515 22:05:37.699] core.sh:1321: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0515 22:05:37.785] namespace "my-namespace" deleted
I0515 22:05:42.911] namespace/my-namespace condition met
I0515 22:05:43.019] Successful
I0515 22:05:43.020] message:Error from server (NotFound): namespaces "my-namespace" not found
I0515 22:05:43.020] has: not found
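Note: "namespace/my-namespace condition met" followed by a NotFound on get is the block-until-deleted pattern; the output shape matches kubectl wait, so the step is presumably something like (timeout value illustrative):

    $ kubectl wait --for=delete namespace/my-namespace --timeout=60s
    $ kubectl get namespaces/my-namespace    # Error from server (NotFound)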
I0515 22:05:43.103] namespace/my-namespace created
I0515 22:05:43.221] core.sh:1330: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0515 22:05:43.455] Successful
I0515 22:05:43.455] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0515 22:05:43.455] namespace "kube-node-lease" deleted
... skipping 30 lines ...
I0515 22:05:43.460] namespace "namespace-1557957898-9841" deleted
I0515 22:05:43.460] namespace "namespace-1557957899-24956" deleted
I0515 22:05:43.460] namespace "namespace-1557957901-10427" deleted
I0515 22:05:43.460] namespace "namespace-1557957903-12628" deleted
I0515 22:05:43.460] namespace "namespace-1557957924-12272" deleted
I0515 22:05:43.460] namespace "namespace-1557957924-6509" deleted
I0515 22:05:43.460] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0515 22:05:43.461] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0515 22:05:43.461] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0515 22:05:43.461] has:warning: deleting cluster-scoped resources
I0515 22:05:43.461] Successful
I0515 22:05:43.461] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0515 22:05:43.461] namespace "kube-node-lease" deleted
I0515 22:05:43.462] namespace "my-namespace" deleted
I0515 22:05:43.462] namespace "namespace-1557957785-9272" deleted
... skipping 28 lines ...
I0515 22:05:43.464] namespace "namespace-1557957898-9841" deleted
I0515 22:05:43.464] namespace "namespace-1557957899-24956" deleted
I0515 22:05:43.464] namespace "namespace-1557957901-10427" deleted
I0515 22:05:43.465] namespace "namespace-1557957903-12628" deleted
I0515 22:05:43.465] namespace "namespace-1557957924-12272" deleted
I0515 22:05:43.465] namespace "namespace-1557957924-6509" deleted
I0515 22:05:43.465] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0515 22:05:43.465] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0515 22:05:43.465] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0515 22:05:43.465] has:namespace "my-namespace" deleted
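Note: the bulk delete above sweeps every namespace the run created, while the API server refuses the protected ones; default, kube-public, and kube-system may never be deleted. A sketch of the shape of the command (the exact flags in the harness are not shown here):

    $ kubectl delete namespaces --all    # test namespaces deleted; protected namespaces return Forbidden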
I0515 22:05:43.597] core.sh:1342: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I0515 22:05:43.689] namespace/other created
I0515 22:05:43.806] core.sh:1346: Successful get namespaces/other {{.metadata.name}}: other
I0515 22:05:43.919] core.sh:1350: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0515 22:05:44.147] pod/valid-pod created
I0515 22:05:44.285] core.sh:1354: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0515 22:05:44.406] core.sh:1356: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0515 22:05:44.508] Successful
I0515 22:05:44.508] message:error: a resource cannot be retrieved by name across all namespaces
I0515 22:05:44.508] has:a resource cannot be retrieved by name across all namespaces
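Note: the check above pins down a kubectl rule: a resource can be fetched by name only within a single namespace, so a name argument cannot be combined with --all-namespaces. Illustration:

    $ kubectl get pod valid-pod --all-namespaces    # error: a resource cannot be retrieved by name across all namespaces
    $ kubectl get pods --all-namespaces             # listing without a name is fine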
I0515 22:05:44.620] core.sh:1363: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0515 22:05:44.720] pod "valid-pod" force deleted
W0515 22:05:44.821] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0515 22:05:44.922] core.sh:1367: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0515 22:05:44.929] namespace "other" deleted
... skipping 151 lines ...
I0515 22:06:06.397] +++ command: run_client_config_tests
I0515 22:06:06.410] +++ [0515 22:06:06] Creating namespace namespace-1557957966-17725
I0515 22:06:06.496] namespace/namespace-1557957966-17725 created
I0515 22:06:06.587] Context "test" modified.
I0515 22:06:06.597] +++ [0515 22:06:06] Testing client config
I0515 22:06:06.683] Successful
I0515 22:06:06.683] message:error: stat missing: no such file or directory
I0515 22:06:06.683] has:missing: no such file or directory
I0515 22:06:06.766] Successful
I0515 22:06:06.766] message:error: stat missing: no such file or directory
I0515 22:06:06.766] has:missing: no such file or directory
I0515 22:06:06.852] Successful
I0515 22:06:06.853] message:error: stat missing: no such file or directory
I0515 22:06:06.853] has:missing: no such file or directory
I0515 22:06:06.941] Successful
I0515 22:06:06.941] message:Error in configuration: context was not found for specified context: missing-context
I0515 22:06:06.942] has:context was not found for specified context: missing-context
I0515 22:06:07.030] Successful
I0515 22:06:07.030] message:error: no server found for cluster "missing-cluster"
I0515 22:06:07.030] has:no server found for cluster "missing-cluster"
I0515 22:06:07.122] Successful
I0515 22:06:07.122] message:error: auth info "missing-user" does not exist
I0515 22:06:07.122] has:auth info "missing-user" does not exist
I0515 22:06:07.294] Successful
I0515 22:06:07.294] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0515 22:06:07.295] has:Error loading config file
I0515 22:06:07.395] Successful
I0515 22:06:07.396] message:error: stat missing-config: no such file or directory
I0515 22:06:07.396] has:no such file or directory
I0515 22:06:07.412] +++ exit code: 0
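Note: run_client_config_tests walks through each way client configuration can fail before any request is made: a missing kubeconfig file, a context/cluster/user name that is absent from the file, and an unparseable config. The corresponding flags (the values are the test's placeholders):

    $ kubectl get pods --kubeconfig=missing          # stat missing: no such file or directory
    $ kubectl get pods --context=missing-context     # context was not found
    $ kubectl get pods --cluster=missing-cluster     # no server found for cluster
    $ kubectl get pods --user=missing-user           # auth info does not exist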
I0515 22:06:07.473] Recording: run_service_accounts_tests
I0515 22:06:07.474] Running command: run_service_accounts_tests
I0515 22:06:07.497] 
I0515 22:06:07.499] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 35 lines ...
I0515 22:06:14.618] Labels:                        run=pi
I0515 22:06:14.618] Annotations:                   <none>
I0515 22:06:14.618] Schedule:                      59 23 31 2 *
I0515 22:06:14.618] Concurrency Policy:            Allow
I0515 22:06:14.619] Suspend:                       False
I0515 22:06:14.619] Successful Job History Limit:  3
I0515 22:06:14.619] Failed Job History Limit:      1
I0515 22:06:14.619] Starting Deadline Seconds:     <unset>
I0515 22:06:14.619] Selector:                      <unset>
I0515 22:06:14.619] Parallelism:                   <unset>
I0515 22:06:14.619] Completions:                   <unset>
I0515 22:06:14.619] Pod Template:
I0515 22:06:14.619]   Labels:  run=pi
... skipping 33 lines ...
I0515 22:06:15.290]                 run=pi
I0515 22:06:15.290] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0515 22:06:15.290] Controlled By:  CronJob/pi
I0515 22:06:15.290] Parallelism:    1
I0515 22:06:15.290] Completions:    1
I0515 22:06:15.290] Start Time:     Wed, 15 May 2019 22:06:14 +0000
I0515 22:06:15.290] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0515 22:06:15.290] Pod Template:
I0515 22:06:15.291]   Labels:  controller-uid=8b309c6a-45e9-4d2f-b3d2-10892e23a364
I0515 22:06:15.291]            job-name=test-job
I0515 22:06:15.291]            run=pi
I0515 22:06:15.291]   Containers:
I0515 22:06:15.291]    pi:
... skipping 388 lines ...
I0515 22:06:26.062]   selector:
I0515 22:06:26.062]     role: padawan
I0515 22:06:26.062]   sessionAffinity: None
I0515 22:06:26.063]   type: ClusterIP
I0515 22:06:26.063] status:
I0515 22:06:26.063]   loadBalancer: {}
W0515 22:06:26.163] error: you must specify resources by --filename when --local is set.
W0515 22:06:26.164] Example resource specifications include:
W0515 22:06:26.164]    '-f rsrc.yaml'
W0515 22:06:26.164]    '--filename=rsrc.json'
I0515 22:06:26.267] core.sh:886: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0515 22:06:26.505] core.sh:893: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0515 22:06:26.610] (Bservice "redis-master" deleted
... skipping 107 lines ...
I0515 22:06:34.976] apps.sh:80: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0515 22:06:35.083] apps.sh:81: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0515 22:06:35.199] daemonset.extensions/bind rolled back
I0515 22:06:35.323] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0515 22:06:35.435] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0515 22:06:35.561] Successful
I0515 22:06:35.562] message:error: unable to find specified revision 1000000 in history
I0515 22:06:35.562] has:unable to find specified revision
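Note: the daemonset steps above roll the template back and forth through its revision history; asking for a revision that was never recorded fails cleanly. A sketch against the "bind" daemonset from apps.sh:

    $ kubectl rollout undo daemonset/bind                         # back to the previous revision
    $ kubectl rollout undo daemonset/bind --to-revision=1000000   # error: unable to find specified revision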
I0515 22:06:35.672] apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0515 22:06:35.779] apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0515 22:06:35.906] daemonset.extensions/bind rolled back
I0515 22:06:36.030] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0515 22:06:36.140] apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 28 lines ...
I0515 22:06:37.915] Namespace:    namespace-1557957996-17375
I0515 22:06:37.915] Selector:     app=guestbook,tier=frontend
I0515 22:06:37.915] Labels:       app=guestbook
I0515 22:06:37.915]               tier=frontend
I0515 22:06:37.915] Annotations:  <none>
I0515 22:06:37.915] Replicas:     3 current / 3 desired
I0515 22:06:37.915] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0515 22:06:37.916] Pod Template:
I0515 22:06:37.916]   Labels:  app=guestbook
I0515 22:06:37.916]            tier=frontend
I0515 22:06:37.916]   Containers:
I0515 22:06:37.916]    php-redis:
I0515 22:06:37.916]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0515 22:06:38.061] Namespace:    namespace-1557957996-17375
I0515 22:06:38.062] Selector:     app=guestbook,tier=frontend
I0515 22:06:38.062] Labels:       app=guestbook
I0515 22:06:38.062]               tier=frontend
I0515 22:06:38.062] Annotations:  <none>
I0515 22:06:38.062] Replicas:     3 current / 3 desired
I0515 22:06:38.062] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0515 22:06:38.062] Pod Template:
I0515 22:06:38.062]   Labels:  app=guestbook
I0515 22:06:38.062]            tier=frontend
I0515 22:06:38.062]   Containers:
I0515 22:06:38.062]    php-redis:
I0515 22:06:38.063]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0515 22:06:38.198] Namespace:    namespace-1557957996-17375
I0515 22:06:38.198] Selector:     app=guestbook,tier=frontend
I0515 22:06:38.199] Labels:       app=guestbook
I0515 22:06:38.199]               tier=frontend
I0515 22:06:38.199] Annotations:  <none>
I0515 22:06:38.199] Replicas:     3 current / 3 desired
I0515 22:06:38.199] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0515 22:06:38.199] Pod Template:
I0515 22:06:38.199]   Labels:  app=guestbook
I0515 22:06:38.199]            tier=frontend
I0515 22:06:38.199]   Containers:
I0515 22:06:38.200]    php-redis:
I0515 22:06:38.200]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I0515 22:06:38.343] Namespace:    namespace-1557957996-17375
I0515 22:06:38.343] Selector:     app=guestbook,tier=frontend
I0515 22:06:38.343] Labels:       app=guestbook
I0515 22:06:38.343]               tier=frontend
I0515 22:06:38.344] Annotations:  <none>
I0515 22:06:38.344] Replicas:     3 current / 3 desired
I0515 22:06:38.344] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0515 22:06:38.344] Pod Template:
I0515 22:06:38.344]   Labels:  app=guestbook
I0515 22:06:38.344]            tier=frontend
I0515 22:06:38.344]   Containers:
I0515 22:06:38.344]    php-redis:
I0515 22:06:38.344]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0515 22:06:38.523] Namespace:    namespace-1557957996-17375
I0515 22:06:38.524] Selector:     app=guestbook,tier=frontend
I0515 22:06:38.524] Labels:       app=guestbook
I0515 22:06:38.524]               tier=frontend
I0515 22:06:38.524] Annotations:  <none>
I0515 22:06:38.524] Replicas:     3 current / 3 desired
I0515 22:06:38.524] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0515 22:06:38.524] Pod Template:
I0515 22:06:38.524]   Labels:  app=guestbook
I0515 22:06:38.524]            tier=frontend
I0515 22:06:38.525]   Containers:
I0515 22:06:38.525]    php-redis:
I0515 22:06:38.525]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0515 22:06:38.662] Namespace:    namespace-1557957996-17375
I0515 22:06:38.662] Selector:     app=guestbook,tier=frontend
I0515 22:06:38.662] Labels:       app=guestbook
I0515 22:06:38.662]               tier=frontend
I0515 22:06:38.662] Annotations:  <none>
I0515 22:06:38.662] Replicas:     3 current / 3 desired
I0515 22:06:38.662] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0515 22:06:38.663] Pod Template:
I0515 22:06:38.663]   Labels:  app=guestbook
I0515 22:06:38.663]            tier=frontend
I0515 22:06:38.663]   Containers:
I0515 22:06:38.663]    php-redis:
I0515 22:06:38.663]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0515 22:06:38.801] Namespace:    namespace-1557957996-17375
I0515 22:06:38.801] Selector:     app=guestbook,tier=frontend
I0515 22:06:38.801] Labels:       app=guestbook
I0515 22:06:38.801]               tier=frontend
I0515 22:06:38.801] Annotations:  <none>
I0515 22:06:38.801] Replicas:     3 current / 3 desired
I0515 22:06:38.802] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0515 22:06:38.802] Pod Template:
I0515 22:06:38.802]   Labels:  app=guestbook
I0515 22:06:38.802]            tier=frontend
I0515 22:06:38.802]   Containers:
I0515 22:06:38.802]    php-redis:
I0515 22:06:38.802]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0515 22:06:38.934] Namespace:    namespace-1557957996-17375
I0515 22:06:38.934] Selector:     app=guestbook,tier=frontend
I0515 22:06:38.934] Labels:       app=guestbook
I0515 22:06:38.934]               tier=frontend
I0515 22:06:38.934] Annotations:  <none>
I0515 22:06:38.935] Replicas:     3 current / 3 desired
I0515 22:06:38.935] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0515 22:06:38.935] Pod Template:
I0515 22:06:38.935]   Labels:  app=guestbook
I0515 22:06:38.935]            tier=frontend
I0515 22:06:38.935]   Containers:
I0515 22:06:38.935]    php-redis:
I0515 22:06:38.935]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
W0515 22:06:39.266] I0515 22:06:39.173980   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557957996-17375", Name:"frontend", UID:"b0a2cdfc-f0a2-4dad-b90a-2314e2a548ac", APIVersion:"v1", ResourceVersion:"1684", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-s8cjv
I0515 22:06:39.367] core.sh:1071: Successful get rc frontend {{.spec.replicas}}: 2
I0515 22:06:39.402] core.sh:1075: Successful get rc frontend {{.spec.replicas}}: 2
I0515 22:06:39.607] core.sh:1079: Successful get rc frontend {{.spec.replicas}}: 2
I0515 22:06:39.716] core.sh:1083: Successful get rc frontend {{.spec.replicas}}: 2
I0515 22:06:39.820] replicationcontroller/frontend scaled
W0515 22:06:39.920] error: Expected replicas to be 3, was 2
W0515 22:06:39.921] I0515 22:06:39.825224   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557957996-17375", Name:"frontend", UID:"b0a2cdfc-f0a2-4dad-b90a-2314e2a548ac", APIVersion:"v1", ResourceVersion:"1691", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-f2cvv
I0515 22:06:40.021] core.sh:1087: Successful get rc frontend {{.spec.replicas}}: 3
I0515 22:06:40.043] core.sh:1091: Successful get rc frontend {{.spec.replicas}}: 3
I0515 22:06:40.137] replicationcontroller/frontend scaled
W0515 22:06:40.238] I0515 22:06:40.143765   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557957996-17375", Name:"frontend", UID:"b0a2cdfc-f0a2-4dad-b90a-2314e2a548ac", APIVersion:"v1", ResourceVersion:"1696", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-f2cvv
I0515 22:06:40.338] core.sh:1095: Successful get rc frontend {{.spec.replicas}}: 2
... skipping 41 lines ...
I0515 22:06:42.902] service "expose-test-deployment" deleted
I0515 22:06:43.026] Successful
I0515 22:06:43.026] message:service/expose-test-deployment exposed
I0515 22:06:43.026] has:service/expose-test-deployment exposed
I0515 22:06:43.126] service "expose-test-deployment" deleted
I0515 22:06:43.255] Successful
I0515 22:06:43.256] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0515 22:06:43.256] See 'kubectl expose -h' for help and examples
I0515 22:06:43.256] has:invalid deployment: no selectors
I0515 22:06:43.377] Successful
I0515 22:06:43.377] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0515 22:06:43.377] See 'kubectl expose -h' for help and examples
I0515 22:06:43.378] has:invalid deployment: no selectors
I0515 22:06:43.604] deployment.apps/nginx-deployment created
W0515 22:06:43.705] I0515 22:06:43.610956   50944 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557957996-17375", Name:"nginx-deployment", UID:"3c513abf-5849-4210-936a-70bfb0d10291", APIVersion:"apps/v1", ResourceVersion:"1812", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5cb597d4f to 3
W0515 22:06:43.705] I0515 22:06:43.616634   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557957996-17375", Name:"nginx-deployment-5cb597d4f", UID:"d32f6f5b-6dd4-4d8b-804e-3eace23c2d96", APIVersion:"apps/v1", ResourceVersion:"1813", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5cb597d4f-c6w2q
W0515 22:06:43.706] I0515 22:06:43.623153   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557957996-17375", Name:"nginx-deployment-5cb597d4f", UID:"d32f6f5b-6dd4-4d8b-804e-3eace23c2d96", APIVersion:"apps/v1", ResourceVersion:"1813", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5cb597d4f-94m5l
... skipping 23 lines ...
I0515 22:06:45.963] service "frontend" deleted
I0515 22:06:45.970] service "frontend-2" deleted
I0515 22:06:45.978] service "frontend-3" deleted
I0515 22:06:45.985] service "frontend-4" deleted
I0515 22:06:45.993] service "frontend-5" deleted
I0515 22:06:46.115] Successful
I0515 22:06:46.115] message:error: cannot expose a Node
I0515 22:06:46.115] has:cannot expose
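Note: kubectl expose only works on objects from which it can derive a selector and ports; a Node has neither, hence "cannot expose a Node". An illustrative failing call (the node name is assumed):

    $ kubectl expose node 127.0.0.1 --port=80    # error: cannot expose a Node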
I0515 22:06:46.223] Successful
I0515 22:06:46.224] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0515 22:06:46.224] has:metadata.name: Invalid value
I0515 22:06:46.336] Successful
I0515 22:06:46.336] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 30 lines ...
I0515 22:06:48.779] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0515 22:06:48.902] core.sh:1259: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I0515 22:06:48.995] horizontalpodautoscaler.autoscaling "frontend" deleted
I0515 22:06:49.103] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0515 22:06:49.216] core.sh:1263: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0515 22:06:49.315] horizontalpodautoscaler.autoscaling "frontend" deleted
W0515 22:06:49.415] Error: required flag(s) "max" not set
W0515 22:06:49.416] 
W0515 22:06:49.416] 
W0515 22:06:49.416] Examples:
W0515 22:06:49.416]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0515 22:06:49.416]   kubectl autoscale deployment foo --min=2 --max=10
W0515 22:06:49.416]   
... skipping 55 lines ...
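The autoscale failure above (required flag "max" not set) is the expected negative case: --max is mandatory, while --min and --cpu-percent are optional. A working sketch under the same assumptions, names illustrative:
  kubectl autoscale deployment foo --min=2 --max=10 --cpu-percent=80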
I0515 22:06:49.704]           limits:
I0515 22:06:49.704]             cpu: 300m
I0515 22:06:49.705]           requests:
I0515 22:06:49.705]             cpu: 300m
I0515 22:06:49.705]       terminationGracePeriodSeconds: 0
I0515 22:06:49.705] status: {}
W0515 22:06:49.805] Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
I0515 22:06:50.028] deployment.apps/nginx-deployment-resources created
W0515 22:06:50.129] I0515 22:06:50.034648   50944 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557957996-17375", Name:"nginx-deployment-resources", UID:"bad40771-f9f0-48e1-85cf-b555a4dce15d", APIVersion:"apps/v1", ResourceVersion:"1954", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-865b6bb7c6 to 3
W0515 22:06:50.129] I0515 22:06:50.039216   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557957996-17375", Name:"nginx-deployment-resources-865b6bb7c6", UID:"9d3ccb57-61ee-4a41-8c84-8ea8e3e566a3", APIVersion:"apps/v1", ResourceVersion:"1955", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-865b6bb7c6-pdh4x
W0515 22:06:50.130] I0515 22:06:50.044242   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557957996-17375", Name:"nginx-deployment-resources-865b6bb7c6", UID:"9d3ccb57-61ee-4a41-8c84-8ea8e3e566a3", APIVersion:"apps/v1", ResourceVersion:"1955", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-865b6bb7c6-c4mdt
W0515 22:06:50.130] I0515 22:06:50.044367   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557957996-17375", Name:"nginx-deployment-resources-865b6bb7c6", UID:"9d3ccb57-61ee-4a41-8c84-8ea8e3e566a3", APIVersion:"apps/v1", ResourceVersion:"1955", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-865b6bb7c6-gc6zs
I0515 22:06:50.231] core.sh:1278: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
... skipping 2 lines ...
I0515 22:06:50.527] deployment.extensions/nginx-deployment-resources resource requirements updated
W0515 22:06:50.628] I0515 22:06:50.535740   50944 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557957996-17375", Name:"nginx-deployment-resources", UID:"bad40771-f9f0-48e1-85cf-b555a4dce15d", APIVersion:"apps/v1", ResourceVersion:"1968", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-69b4c96c9b to 1
W0515 22:06:50.629] I0515 22:06:50.540163   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557957996-17375", Name:"nginx-deployment-resources-69b4c96c9b", UID:"3cd404f8-4eae-4a4e-8474-38580eb3788d", APIVersion:"apps/v1", ResourceVersion:"1969", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69b4c96c9b-6ndcc
I0515 22:06:50.730] core.sh:1283: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
I0515 22:06:50.782] core.sh:1284: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
I0515 22:06:51.005] deployment.extensions/nginx-deployment-resources resource requirements updated
W0515 22:06:51.106] error: unable to find container named redis
W0515 22:06:51.106] I0515 22:06:51.029905   50944 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557957996-17375", Name:"nginx-deployment-resources", UID:"bad40771-f9f0-48e1-85cf-b555a4dce15d", APIVersion:"apps/v1", ResourceVersion:"1977", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-69b4c96c9b to 0
W0515 22:06:51.107] I0515 22:06:51.036319   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557957996-17375", Name:"nginx-deployment-resources-69b4c96c9b", UID:"3cd404f8-4eae-4a4e-8474-38580eb3788d", APIVersion:"apps/v1", ResourceVersion:"1981", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-69b4c96c9b-6ndcc
W0515 22:06:51.107] I0515 22:06:51.050951   50944 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557957996-17375", Name:"nginx-deployment-resources", UID:"bad40771-f9f0-48e1-85cf-b555a4dce15d", APIVersion:"apps/v1", ResourceVersion:"1980", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-7bb7d84c58 to 1
W0515 22:06:51.107] I0515 22:06:51.057051   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557957996-17375", Name:"nginx-deployment-resources-7bb7d84c58", UID:"59cfe345-58fa-4aab-96b0-a33725d9d020", APIVersion:"apps/v1", ResourceVersion:"1987", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-7bb7d84c58-vrmgj
I0515 22:06:51.208] core.sh:1289: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0515 22:06:51.268] core.sh:1290: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
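The "error: unable to find container named redis" above is the expected negative case for kubectl set resources: with -c, the name must match a container in the pod template. A sketch under the same assumptions (container names here are illustrative):
  kubectl set resources deployment nginx-deployment-resources -c=nginx --limits=cpu=200m
  kubectl set resources deployment nginx-deployment-resources --limits=cpu=200m,memory=512Mi  # omitting -c updates every container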
... skipping 211 lines ...
I0515 22:06:51.859]     status: "True"
I0515 22:06:51.859]     type: Progressing
I0515 22:06:51.859]   observedGeneration: 4
I0515 22:06:51.859]   replicas: 4
I0515 22:06:51.859]   unavailableReplicas: 4
I0515 22:06:51.859]   updatedReplicas: 1
W0515 22:06:51.959] error: you must specify resources by --filename when --local is set.
W0515 22:06:51.960] Example resource specifications include:
W0515 22:06:51.960]    '-f rsrc.yaml'
W0515 22:06:51.960]    '--filename=rsrc.json'
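The --local error above documents the contract: with --local, kubectl mutates the manifest client-side without contacting the server, so the object must come from a file. A sketch, assuming a local deployment.yaml:
  kubectl set resources -f deployment.yaml --local --limits=cpu=200m -o yaml  # prints the mutated manifest; nothing is applied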
I0515 22:06:52.061] core.sh:1299: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0515 22:06:52.143] core.sh:1300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I0515 22:06:52.250] core.sh:1301: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 44 lines ...
I0515 22:06:54.015]                 pod-template-hash=75c7695cbd
I0515 22:06:54.015] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I0515 22:06:54.016]                 deployment.kubernetes.io/max-replicas: 2
I0515 22:06:54.016]                 deployment.kubernetes.io/revision: 1
I0515 22:06:54.016] Controlled By:  Deployment/test-nginx-apps
I0515 22:06:54.016] Replicas:       1 current / 1 desired
I0515 22:06:54.016] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0515 22:06:54.016] Pod Template:
I0515 22:06:54.016]   Labels:  app=test-nginx-apps
I0515 22:06:54.017]            pod-template-hash=75c7695cbd
I0515 22:06:54.017]   Containers:
I0515 22:06:54.017]    nginx:
I0515 22:06:54.017]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 90 lines ...
I0515 22:06:59.052]     Image:	k8s.gcr.io/nginx:test-cmd
I0515 22:06:59.164] apps.sh:296: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0515 22:06:59.285] deployment.extensions/nginx rolled back
I0515 22:07:00.408] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0515 22:07:00.676] apps.sh:303: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0515 22:07:00.814] deployment.extensions/nginx rolled back
W0515 22:07:00.915] error: unable to find specified revision 1000000 in history
I0515 22:07:01.951] apps.sh:307: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0515 22:07:02.068] deployment.extensions/nginx paused
W0515 22:07:02.196] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
W0515 22:07:02.303] error: deployments.extensions "nginx" can't restart paused deployment (run rollout resume first)
I0515 22:07:02.430] deployment.extensions/nginx resumed
I0515 22:07:02.573] deployment.extensions/nginx rolled back
I0515 22:07:02.839]     deployment.kubernetes.io/revision-history: 1,3
W0515 22:07:03.040] error: desired revision (3) is different from the running revision (5)
I0515 22:07:03.153] deployment.extensions/nginx restarted
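The three rollout errors above encode the preconditions exercised here: a paused deployment cannot be rolled back or restarted until it is resumed, and an undo must target a revision recorded in the history. A sketch of the corresponding happy paths, same assumptions as above:
  kubectl rollout resume deployment/nginx
  kubectl rollout history deployment/nginx               # lists recorded revisions
  kubectl rollout undo deployment/nginx --to-revision=3  # revision must exist in that history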
W0515 22:07:03.254] I0515 22:07:03.176657   50944 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557958012-32134", Name:"nginx", UID:"f6e5e666-e380-46fe-ba43-9dac1a668a53", APIVersion:"apps/v1", ResourceVersion:"2200", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-6d5ddf7f58 to 0
W0515 22:07:03.254] I0515 22:07:03.184022   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557958012-32134", Name:"nginx-6d5ddf7f58", UID:"2ec7b377-cfcd-459e-96af-7106adcdf0c1", APIVersion:"apps/v1", ResourceVersion:"2204", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-6d5ddf7f58-9hl4l
W0515 22:07:03.254] I0515 22:07:03.199018   50944 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557958012-32134", Name:"nginx", UID:"f6e5e666-e380-46fe-ba43-9dac1a668a53", APIVersion:"apps/v1", ResourceVersion:"2203", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-858f5fc564 to 1
W0515 22:07:03.255] I0515 22:07:03.204111   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557958012-32134", Name:"nginx-858f5fc564", UID:"1537b37c-e402-48ef-b6c5-428588f8519e", APIVersion:"apps/v1", ResourceVersion:"2210", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-858f5fc564-5x49t
W0515 22:07:03.778] I0515 22:07:03.778068   50944 horizontal.go:320] Horizontal Pod Autoscaler frontend has been deleted in namespace-1557957996-17375
... skipping 143 lines ...
I0515 22:07:05.724] deployment.extensions/nginx-deployment image updated
W0515 22:07:05.825] I0515 22:07:05.730925   50944 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557958012-32134", Name:"nginx-deployment", UID:"9bdcb27f-d800-4cbc-916a-cc03a12265d2", APIVersion:"apps/v1", ResourceVersion:"2270", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-64f55cb875 to 1
W0515 22:07:05.825] I0515 22:07:05.736159   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557958012-32134", Name:"nginx-deployment-64f55cb875", UID:"f6506248-dc97-4e9c-b7be-59e7e6ebd341", APIVersion:"apps/v1", ResourceVersion:"2271", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-64f55cb875-xdfwz
I0515 22:07:05.926] apps.sh:345: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0515 22:07:05.967] apps.sh:346: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0515 22:07:06.182] deployment.extensions/nginx-deployment image updated
W0515 22:07:06.283] error: unable to find container named "redis"
I0515 22:07:06.384] apps.sh:351: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0515 22:07:06.410] apps.sh:352: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0515 22:07:06.516] deployment.apps/nginx-deployment image updated
I0515 22:07:06.633] apps.sh:355: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0515 22:07:06.745] apps.sh:356: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0515 22:07:06.944] apps.sh:359: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
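The error: unable to find container named "redis" above is the same negative case for kubectl set image: containers are addressed by name. A sketch with illustrative names:
  kubectl set image deployment/nginx-deployment nginx=k8s.gcr.io/nginx:1.7.9
  kubectl set image deployment/nginx-deployment '*'=k8s.gcr.io/nginx:1.7.9  # wildcard updates every container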
... skipping 46 lines ...
W0515 22:07:10.117] I0515 22:07:10.116788   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557958012-32134", Name:"nginx-deployment-57b54775", UID:"fa4fc644-1461-4213-a399-0186fa47f041", APIVersion:"apps/v1", ResourceVersion:"2408", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-57b54775-xmml5
W0515 22:07:10.199] I0515 22:07:10.198872   50944 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557958012-32134", Name:"nginx-deployment", UID:"e9d16aa5-8232-481a-ac7e-e3d5e7472523", APIVersion:"apps/v1", ResourceVersion:"2415", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-5dfd5c49d4 to 0
I0515 22:07:10.300] deployment.extensions/nginx-deployment env updated
I0515 22:07:10.300] deployment.extensions/nginx-deployment env updated
I0515 22:07:10.370] deployment.extensions "nginx-deployment" deleted
W0515 22:07:10.471] I0515 22:07:10.347838   50944 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557958012-32134", Name:"nginx-deployment", UID:"e9d16aa5-8232-481a-ac7e-e3d5e7472523", APIVersion:"apps/v1", ResourceVersion:"2422", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-7d8bf5bf54 to 1
W0515 22:07:10.471] E0515 22:07:10.414423   50944 replica_set.go:450] Sync "namespace-1557958012-32134/nginx-deployment-8656b684b8" failed with replicasets.apps "nginx-deployment-8656b684b8" not found
W0515 22:07:10.472] E0515 22:07:10.464312   50944 replica_set.go:450] Sync "namespace-1557958012-32134/nginx-deployment-5dfd5c49d4" failed with Operation cannot be fulfilled on replicasets.apps "nginx-deployment-5dfd5c49d4": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1557958012-32134/nginx-deployment-5dfd5c49d4, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 25c53c3f-5e32-4f90-b8f3-6c2784eecd36, UID in object meta: 
W0515 22:07:10.567] I0515 22:07:10.566287   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557958012-32134", Name:"nginx-deployment-7d8bf5bf54", UID:"8832dcb8-741b-4d02-8adc-2ef504955e66", APIVersion:"apps/v1", ResourceVersion:"2425", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-7d8bf5bf54-jjq6l
W0515 22:07:10.617] I0515 22:07:10.617113   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557958012-32134", Name:"nginx-deployment-8656b684b8", UID:"6b98dad2-53c8-4e57-9b21-5fdf6c5d2cf9", APIVersion:"apps/v1", ResourceVersion:"2410", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-8656b684b8-8g86k
W0515 22:07:10.661] E0515 22:07:10.661088   50944 replica_set.go:450] Sync "namespace-1557958012-32134/nginx-deployment-57b54775" failed with replicasets.apps "nginx-deployment-57b54775" not found
I0515 22:07:10.762] configmap "test-set-env-config" deleted
I0515 22:07:10.762] secret "test-set-env-secret" deleted
I0515 22:07:10.762] +++ exit code: 0
I0515 22:07:10.762] Recording: run_rs_tests
I0515 22:07:10.763] Running command: run_rs_tests
I0515 22:07:10.785] 
I0515 22:07:10.787] +++ Running case: test-cmd.run_rs_tests 
I0515 22:07:10.792] +++ working dir: /go/src/k8s.io/kubernetes
I0515 22:07:10.795] +++ command: run_rs_tests
I0515 22:07:10.812] +++ [0515 22:07:10] Creating namespace namespace-1557958030-15055
I0515 22:07:10.898] namespace/namespace-1557958030-15055 created
I0515 22:07:10.982] Context "test" modified.
I0515 22:07:10.993] +++ [0515 22:07:10] Testing kubectl(v1:replicasets)
W0515 22:07:11.094] E0515 22:07:10.911112   50944 replica_set.go:450] Sync "namespace-1557958012-32134/nginx-deployment-7d8bf5bf54" failed with replicasets.apps "nginx-deployment-7d8bf5bf54" not found
W0515 22:07:11.094] E0515 22:07:10.961128   50944 replica_set.go:450] Sync "namespace-1557958012-32134/nginx-deployment-8656b684b8" failed with replicasets.apps "nginx-deployment-8656b684b8" not found
I0515 22:07:11.195] apps.sh:510: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I0515 22:07:11.324] replicaset.apps/frontend created
I0515 22:07:11.347] +++ [0515 22:07:11] Deleting rs
I0515 22:07:11.433] replicaset.extensions "frontend" deleted
W0515 22:07:11.534] I0515 22:07:11.332595   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557958030-15055", Name:"frontend", UID:"1d741f28-b803-42e5-b8e6-b067add70c80", APIVersion:"apps/v1", ResourceVersion:"2455", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-5txdf
W0515 22:07:11.534] I0515 22:07:11.336904   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557958030-15055", Name:"frontend", UID:"1d741f28-b803-42e5-b8e6-b067add70c80", APIVersion:"apps/v1", ResourceVersion:"2455", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-zqh2j
W0515 22:07:11.535] I0515 22:07:11.337427   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557958030-15055", Name:"frontend", UID:"1d741f28-b803-42e5-b8e6-b067add70c80", APIVersion:"apps/v1", ResourceVersion:"2455", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-th59p
W0515 22:07:11.535] E0515 22:07:11.462013   50944 replica_set.go:450] Sync "namespace-1557958030-15055/frontend" failed with Operation cannot be fulfilled on replicasets.apps "frontend": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1557958030-15055/frontend, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 1d741f28-b803-42e5-b8e6-b067add70c80, UID in object meta: 
I0515 22:07:11.636] apps.sh:516: Successful get pods -l "tier=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
I0515 22:07:11.662] apps.sh:520: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I0515 22:07:11.866] replicaset.apps/frontend-no-cascade created
W0515 22:07:11.966] I0515 22:07:11.872266   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557958030-15055", Name:"frontend-no-cascade", UID:"72039597-51ae-45c6-9690-dc2853f9be45", APIVersion:"apps/v1", ResourceVersion:"2470", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-8nfnd
W0515 22:07:11.967] I0515 22:07:11.876665   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557958030-15055", Name:"frontend-no-cascade", UID:"72039597-51ae-45c6-9690-dc2853f9be45", APIVersion:"apps/v1", ResourceVersion:"2470", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-85q9k
W0515 22:07:11.967] I0515 22:07:11.876719   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557958030-15055", Name:"frontend-no-cascade", UID:"72039597-51ae-45c6-9690-dc2853f9be45", APIVersion:"apps/v1", ResourceVersion:"2470", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-5rncq
I0515 22:07:12.068] apps.sh:526: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
I0515 22:07:12.068] +++ [0515 22:07:12] Deleting rs
I0515 22:07:12.098] replicaset.extensions "frontend-no-cascade" deleted
W0515 22:07:12.199] E0515 22:07:12.115006   50944 replica_set.go:450] Sync "namespace-1557958030-15055/frontend-no-cascade" failed with Operation cannot be fulfilled on replicasets.apps "frontend-no-cascade": the object has been modified; please apply your changes to the latest version and try again
W0515 22:07:12.211] E0515 22:07:12.210612   50944 replica_set.go:450] Sync "namespace-1557958030-15055/frontend-no-cascade" failed with replicasets.apps "frontend-no-cascade" not found
I0515 22:07:12.312] apps.sh:530: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I0515 22:07:12.356] apps.sh:532: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
I0515 22:07:12.463] pod "frontend-no-cascade-5rncq" deleted
I0515 22:07:12.470] pod "frontend-no-cascade-85q9k" deleted
I0515 22:07:12.477] pod "frontend-no-cascade-8nfnd" deleted
W0515 22:07:12.578] I0515 22:07:12.444132   50944 horizontal.go:320] Horizontal Pod Autoscaler nginx-deployment has been deleted in namespace-1557958012-32134
... skipping 9 lines ...
I0515 22:07:13.226] Namespace:    namespace-1557958030-15055
I0515 22:07:13.226] Selector:     app=guestbook,tier=frontend
I0515 22:07:13.226] Labels:       app=guestbook
I0515 22:07:13.226]               tier=frontend
I0515 22:07:13.227] Annotations:  <none>
I0515 22:07:13.227] Replicas:     3 current / 3 desired
I0515 22:07:13.227] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0515 22:07:13.227] Pod Template:
I0515 22:07:13.227]   Labels:  app=guestbook
I0515 22:07:13.227]            tier=frontend
I0515 22:07:13.227]   Containers:
I0515 22:07:13.227]    php-redis:
I0515 22:07:13.227]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0515 22:07:13.367] Namespace:    namespace-1557958030-15055
I0515 22:07:13.367] Selector:     app=guestbook,tier=frontend
I0515 22:07:13.367] Labels:       app=guestbook
I0515 22:07:13.368]               tier=frontend
I0515 22:07:13.368] Annotations:  <none>
I0515 22:07:13.368] Replicas:     3 current / 3 desired
I0515 22:07:13.368] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0515 22:07:13.368] Pod Template:
I0515 22:07:13.368]   Labels:  app=guestbook
I0515 22:07:13.368]            tier=frontend
I0515 22:07:13.368]   Containers:
I0515 22:07:13.368]    php-redis:
I0515 22:07:13.369]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0515 22:07:13.505] Namespace:    namespace-1557958030-15055
I0515 22:07:13.505] Selector:     app=guestbook,tier=frontend
I0515 22:07:13.505] Labels:       app=guestbook
I0515 22:07:13.505]               tier=frontend
I0515 22:07:13.505] Annotations:  <none>
I0515 22:07:13.505] Replicas:     3 current / 3 desired
I0515 22:07:13.505] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0515 22:07:13.506] Pod Template:
I0515 22:07:13.506]   Labels:  app=guestbook
I0515 22:07:13.506]            tier=frontend
I0515 22:07:13.506]   Containers:
I0515 22:07:13.506]    php-redis:
I0515 22:07:13.506]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
I0515 22:07:13.648] Namespace:    namespace-1557958030-15055
I0515 22:07:13.648] Selector:     app=guestbook,tier=frontend
I0515 22:07:13.648] Labels:       app=guestbook
I0515 22:07:13.648]               tier=frontend
I0515 22:07:13.648] Annotations:  <none>
I0515 22:07:13.649] Replicas:     3 current / 3 desired
I0515 22:07:13.649] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0515 22:07:13.649] Pod Template:
I0515 22:07:13.649]   Labels:  app=guestbook
I0515 22:07:13.649]            tier=frontend
I0515 22:07:13.649]   Containers:
I0515 22:07:13.649]    php-redis:
I0515 22:07:13.649]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0515 22:07:13.830] Namespace:    namespace-1557958030-15055
I0515 22:07:13.830] Selector:     app=guestbook,tier=frontend
I0515 22:07:13.830] Labels:       app=guestbook
I0515 22:07:13.830]               tier=frontend
I0515 22:07:13.830] Annotations:  <none>
I0515 22:07:13.830] Replicas:     3 current / 3 desired
I0515 22:07:13.830] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0515 22:07:13.830] Pod Template:
I0515 22:07:13.830]   Labels:  app=guestbook
I0515 22:07:13.830]            tier=frontend
I0515 22:07:13.830]   Containers:
I0515 22:07:13.831]    php-redis:
I0515 22:07:13.831]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0515 22:07:13.967] Namespace:    namespace-1557958030-15055
I0515 22:07:13.967] Selector:     app=guestbook,tier=frontend
I0515 22:07:13.967] Labels:       app=guestbook
I0515 22:07:13.967]               tier=frontend
I0515 22:07:13.967] Annotations:  <none>
I0515 22:07:13.967] Replicas:     3 current / 3 desired
I0515 22:07:13.967] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0515 22:07:13.967] Pod Template:
I0515 22:07:13.968]   Labels:  app=guestbook
I0515 22:07:13.968]            tier=frontend
I0515 22:07:13.968]   Containers:
I0515 22:07:13.968]    php-redis:
I0515 22:07:13.968]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0515 22:07:14.099] Namespace:    namespace-1557958030-15055
I0515 22:07:14.100] Selector:     app=guestbook,tier=frontend
I0515 22:07:14.100] Labels:       app=guestbook
I0515 22:07:14.100]               tier=frontend
I0515 22:07:14.100] Annotations:  <none>
I0515 22:07:14.100] Replicas:     3 current / 3 desired
I0515 22:07:14.100] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0515 22:07:14.100] Pod Template:
I0515 22:07:14.101]   Labels:  app=guestbook
I0515 22:07:14.101]            tier=frontend
I0515 22:07:14.101]   Containers:
I0515 22:07:14.101]    php-redis:
I0515 22:07:14.101]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I0515 22:07:14.234] Namespace:    namespace-1557958030-15055
I0515 22:07:14.234] Selector:     app=guestbook,tier=frontend
I0515 22:07:14.234] Labels:       app=guestbook
I0515 22:07:14.234]               tier=frontend
I0515 22:07:14.234] Annotations:  <none>
I0515 22:07:14.234] Replicas:     3 current / 3 desired
I0515 22:07:14.234] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0515 22:07:14.234] Pod Template:
I0515 22:07:14.235]   Labels:  app=guestbook
I0515 22:07:14.235]            tier=frontend
I0515 22:07:14.235]   Containers:
I0515 22:07:14.235]    php-redis:
I0515 22:07:14.235]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 180 lines ...
I0515 22:07:20.742] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0515 22:07:20.875] apps.sh:651: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I0515 22:07:20.973] horizontalpodautoscaler.autoscaling "frontend" deleted
I0515 22:07:21.080] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0515 22:07:21.201] apps.sh:655: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0515 22:07:21.297] horizontalpodautoscaler.autoscaling "frontend" deleted
W0515 22:07:21.397] Error: required flag(s) "max" not set
W0515 22:07:21.398] 
W0515 22:07:21.398] 
W0515 22:07:21.398] Examples:
W0515 22:07:21.398]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0515 22:07:21.398]   kubectl autoscale deployment foo --min=2 --max=10
W0515 22:07:21.398]   
... skipping 89 lines ...
I0515 22:07:25.261] apps.sh:439: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0515 22:07:25.379] apps.sh:440: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0515 22:07:25.503] statefulset.apps/nginx rolled back
I0515 22:07:25.622] apps.sh:443: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0515 22:07:25.737] apps.sh:444: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0515 22:07:25.862] Successful
I0515 22:07:25.862] message:error: unable to find specified revision 1000000 in history
I0515 22:07:25.862] has:unable to find specified revision
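As with the deployments above, statefulset rollbacks are validated against the recorded revision history. A sketch, same assumptions:
  kubectl rollout history statefulset/nginx
  kubectl rollout undo statefulset/nginx --to-revision=2  # revision number must exist in the history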
I0515 22:07:25.982] apps.sh:448: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0515 22:07:26.098] apps.sh:449: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0515 22:07:26.226] statefulset.apps/nginx rolled back
I0515 22:07:26.358] apps.sh:452: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
I0515 22:07:26.483] apps.sh:453: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
... skipping 58 lines ...
I0515 22:07:28.814] Name:         mock
I0515 22:07:28.814] Namespace:    namespace-1557958047-29324
I0515 22:07:28.814] Selector:     app=mock
I0515 22:07:28.814] Labels:       app=mock
I0515 22:07:28.814] Annotations:  <none>
I0515 22:07:28.815] Replicas:     1 current / 1 desired
I0515 22:07:28.815] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0515 22:07:28.815] Pod Template:
I0515 22:07:28.815]   Labels:  app=mock
I0515 22:07:28.815]   Containers:
I0515 22:07:28.815]    mock-container:
I0515 22:07:28.815]     Image:        k8s.gcr.io/pause:2.0
I0515 22:07:28.815]     Port:         9949/TCP
... skipping 56 lines ...
I0515 22:07:31.556] Name:         mock
I0515 22:07:31.557] Namespace:    namespace-1557958047-29324
I0515 22:07:31.557] Selector:     app=mock
I0515 22:07:31.557] Labels:       app=mock
I0515 22:07:31.557] Annotations:  <none>
I0515 22:07:31.557] Replicas:     1 current / 1 desired
I0515 22:07:31.557] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0515 22:07:31.557] Pod Template:
I0515 22:07:31.557]   Labels:  app=mock
I0515 22:07:31.557]   Containers:
I0515 22:07:31.558]    mock-container:
I0515 22:07:31.558]     Image:        k8s.gcr.io/pause:2.0
I0515 22:07:31.558]     Port:         9949/TCP
... skipping 56 lines ...
I0515 22:07:34.257] Name:         mock
I0515 22:07:34.257] Namespace:    namespace-1557958047-29324
I0515 22:07:34.257] Selector:     app=mock
I0515 22:07:34.258] Labels:       app=mock
I0515 22:07:34.258] Annotations:  <none>
I0515 22:07:34.258] Replicas:     1 current / 1 desired
I0515 22:07:34.258] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0515 22:07:34.258] Pod Template:
I0515 22:07:34.258]   Labels:  app=mock
I0515 22:07:34.258]   Containers:
I0515 22:07:34.258]    mock-container:
I0515 22:07:34.258]     Image:        k8s.gcr.io/pause:2.0
I0515 22:07:34.259]     Port:         9949/TCP
... skipping 42 lines ...
I0515 22:07:36.848] Namespace:    namespace-1557958047-29324
I0515 22:07:36.848] Selector:     app=mock
I0515 22:07:36.848] Labels:       app=mock
I0515 22:07:36.849]               status=replaced
I0515 22:07:36.849] Annotations:  <none>
I0515 22:07:36.849] Replicas:     1 current / 1 desired
I0515 22:07:36.849] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0515 22:07:36.849] Pod Template:
I0515 22:07:36.849]   Labels:  app=mock
I0515 22:07:36.849]   Containers:
I0515 22:07:36.849]    mock-container:
I0515 22:07:36.850]     Image:        k8s.gcr.io/pause:2.0
I0515 22:07:36.850]     Port:         9949/TCP
... skipping 11 lines ...
I0515 22:07:36.858] Namespace:    namespace-1557958047-29324
I0515 22:07:36.858] Selector:     app=mock2
I0515 22:07:36.858] Labels:       app=mock2
I0515 22:07:36.858]               status=replaced
I0515 22:07:36.858] Annotations:  <none>
I0515 22:07:36.859] Replicas:     1 current / 1 desired
I0515 22:07:36.859] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0515 22:07:36.859] Pod Template:
I0515 22:07:36.859]   Labels:  app=mock2
I0515 22:07:36.859]   Containers:
I0515 22:07:36.859]    mock-container:
I0515 22:07:36.859]     Image:        k8s.gcr.io/pause:2.0
I0515 22:07:36.859]     Port:         9949/TCP
... skipping 106 lines ...
I0515 22:07:42.915] +++ [0515 22:07:42] Testing persistent volumes
I0515 22:07:43.020] storage.sh:30: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I0515 22:07:43.253] persistentvolume/pv0001 created
I0515 22:07:43.389] storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
I0515 22:07:43.480] persistentvolume "pv0001" deleted
I0515 22:07:43.718] persistentvolume/pv0002 created
W0515 22:07:43.820] E0515 22:07:43.722946   50944 pv_protection_controller.go:117] PV pv0002 failed with : Operation cannot be fulfilled on persistentvolumes "pv0002": the object has been modified; please apply your changes to the latest version and try again
I0515 22:07:43.921] storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
I0515 22:07:43.937] persistentvolume "pv0002" deleted
I0515 22:07:44.160] persistentvolume/pv0003 created
W0515 22:07:44.260] E0515 22:07:44.164200   50944 pv_protection_controller.go:117] PV pv0003 failed with : Operation cannot be fulfilled on persistentvolumes "pv0003": the object has been modified; please apply your changes to the latest version and try again
I0515 22:07:44.361] storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
I0515 22:07:44.375] persistentvolume "pv0003" deleted
I0515 22:07:44.502] storage.sh:42: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I0515 22:07:44.715] persistentvolume/pv0001 created
W0515 22:07:44.816] E0515 22:07:44.720284   50944 pv_protection_controller.go:117] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
I0515 22:07:44.916] storage.sh:45: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
I0515 22:07:44.960] Successful
I0515 22:07:44.960] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0515 22:07:44.960] persistentvolume "pv0001" deleted
I0515 22:07:44.960] has:warning: deleting cluster-scoped resources
I0515 22:07:44.963] Successful
... skipping 491 lines ...
I0515 22:07:50.892] yes
I0515 22:07:50.892] has:the server doesn't have a resource type
I0515 22:07:50.982] Successful
I0515 22:07:50.983] message:yes
I0515 22:07:50.983] has:yes
I0515 22:07:51.076] Successful
I0515 22:07:51.076] message:error: --subresource can not be used with NonResourceURL
I0515 22:07:51.076] has:subresource can not be used with NonResourceURL
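The can-i failure above is the expected guard: --subresource only composes with resource checks, not with NonResourceURLs. A sketch of both valid forms, same assumptions:
  kubectl auth can-i get pods --subresource=log
  kubectl auth can-i get /logs                    # non-resource URL, no --subresource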
I0515 22:07:51.177] Successful
I0515 22:07:51.289] Successful
I0515 22:07:51.289] message:yes
I0515 22:07:51.289] 0
I0515 22:07:51.289] has:0
... skipping 39 lines ...
W0515 22:07:52.269] 		{Verbs:[get list watch] APIGroups:[] Resources:[configmaps] ResourceNames:[] NonResourceURLs:[]}
I0515 22:07:52.370] legacy-script.sh:801: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
I0515 22:07:52.450] legacy-script.sh:802: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
I0515 22:07:52.587] legacy-script.sh:803: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
I0515 22:07:52.728] legacy-script.sh:804: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
I0515 22:07:52.841] Successful
I0515 22:07:52.841] message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
I0515 22:07:52.842] has:only rbac.authorization.k8s.io/v1 is supported
I0515 22:07:52.966] rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
I0515 22:07:52.974] role.rbac.authorization.k8s.io "testing-R" deleted
I0515 22:07:52.987] clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
I0515 22:07:53.001] clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
I0515 22:07:53.019] Recording: run_retrieve_multiple_tests
... skipping 33 lines ...
I0515 22:07:54.696] +++ Running case: test-cmd.run_kubectl_explain_tests 
I0515 22:07:54.701] +++ working dir: /go/src/k8s.io/kubernetes
I0515 22:07:54.706] +++ command: run_kubectl_explain_tests
I0515 22:07:54.719] +++ [0515 22:07:54] Testing kubectl(v1:explain)
W0515 22:07:54.820] I0515 22:07:54.528680   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557958073-2099", Name:"cassandra", UID:"8d50cc64-eb68-46ed-95e0-bd8d6913104d", APIVersion:"v1", ResourceVersion:"3042", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-6hglp
W0515 22:07:54.821] I0515 22:07:54.546762   50944 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557958073-2099", Name:"cassandra", UID:"8d50cc64-eb68-46ed-95e0-bd8d6913104d", APIVersion:"v1", ResourceVersion:"3042", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-6jsmk
W0515 22:07:54.821] E0515 22:07:54.552534   50944 replica_set.go:450] Sync "namespace-1557958073-2099/cassandra" failed with replicationcontrollers "cassandra" not found
I0515 22:07:54.949] KIND:     Pod
I0515 22:07:54.949] VERSION:  v1
I0515 22:07:54.949] 
I0515 22:07:54.949] DESCRIPTION:
I0515 22:07:54.949]      Pod is a collection of containers that can run on a host. This resource is
I0515 22:07:54.949]      created by clients and scheduled onto hosts.
... skipping 977 lines ...
I0515 22:08:27.608] message:node/127.0.0.1 already uncordoned (dry run)
I0515 22:08:27.608] has:already uncordoned
I0515 22:08:27.718] node-management.sh:119: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
I0515 22:08:27.811] node/127.0.0.1 labeled
I0515 22:08:27.936] node-management.sh:124: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
I0515 22:08:28.025] Successful
I0515 22:08:28.026] message:error: cannot specify both a node name and a --selector option
I0515 22:08:28.026] See 'kubectl drain -h' for help and examples
I0515 22:08:28.026] has:cannot specify both a node name
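The drain error above enforces mutually exclusive targeting: nodes are named either positionally or via --selector, never both. A sketch, same assumptions:
  kubectl drain 127.0.0.1 --ignore-daemonsets
  kubectl drain --selector=test=label --ignore-daemonsets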
I0515 22:08:28.113] Successful
I0515 22:08:28.113] message:error: USAGE: cordon NODE [flags]
I0515 22:08:28.113] See 'kubectl cordon -h' for help and examples
I0515 22:08:28.113] has:error\: USAGE\: cordon NODE
I0515 22:08:28.206] node/127.0.0.1 already uncordoned
I0515 22:08:28.302] Successful
I0515 22:08:28.303] message:error: You must provide one or more resources by argument or filename.
I0515 22:08:28.303] Example resource specifications include:
I0515 22:08:28.303]    '-f rsrc.yaml'
I0515 22:08:28.303]    '--filename=rsrc.json'
I0515 22:08:28.303]    '<resource> <name>'
I0515 22:08:28.304]    '<resource>'
I0515 22:08:28.304] has:must provide one or more resources
... skipping 15 lines ...
I0515 22:08:28.840] Successful
I0515 22:08:28.841] message:The following compatible plugins are available:
I0515 22:08:28.841] 
I0515 22:08:28.841] test/fixtures/pkg/kubectl/plugins/version/kubectl-version
I0515 22:08:28.841]   - warning: kubectl-version overwrites existing command: "kubectl version"
I0515 22:08:28.841] 
I0515 22:08:28.841] error: one plugin warning was found
I0515 22:08:28.842] has:kubectl-version overwrites existing command: "kubectl version"
I0515 22:08:28.926] Successful
I0515 22:08:28.927] message:The following compatible plugins are available:
I0515 22:08:28.927] 
I0515 22:08:28.927] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0515 22:08:28.927] test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
I0515 22:08:28.927]   - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0515 22:08:28.927] 
I0515 22:08:28.927] error: one plugin warning was found
I0515 22:08:28.928] has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
I0515 22:08:29.020] Successful
I0515 22:08:29.020] message:The following compatible plugins are available:
I0515 22:08:29.020] 
I0515 22:08:29.020] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0515 22:08:29.021] has:plugins are available
I0515 22:08:29.113] Successful
I0515 22:08:29.113] message:Unable read directory "test/fixtures/pkg/kubectl/plugins/empty" from your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory. Skipping...
I0515 22:08:29.113] error: unable to find any kubectl plugins in your PATH
I0515 22:08:29.114] has:unable to find any kubectl plugins in your PATH
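The plugin assertions above exercise kubectl's discovery rule: any executable named kubectl-<name> on PATH becomes the subcommand <name>, with the overwrite and shadowing warnings shown when names collide. A minimal sketch (paths are illustrative):
  mkdir -p /tmp/plugins
  cat > /tmp/plugins/kubectl-foo <<'EOF'
  #!/bin/bash
  echo "I am plugin foo"
  EOF
  chmod +x /tmp/plugins/kubectl-foo
  PATH=/tmp/plugins:$PATH kubectl foo          # prints: I am plugin foo
  PATH=/tmp/plugins:$PATH kubectl plugin list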
I0515 22:08:29.196] Successful
I0515 22:08:29.197] message:I am plugin foo
I0515 22:08:29.197] has:plugin foo
I0515 22:08:29.285] Successful
I0515 22:08:29.285] message:Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.0-alpha.0.74+a50e7f62f8e8df", GitCommit:"a50e7f62f8e8df59dee25fea7b14b22190f6671c", GitTreeState:"clean", BuildDate:"2019-05-15T22:00:51Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
... skipping 9 lines ...
I0515 22:08:29.398] 
I0515 22:08:29.400] +++ Running case: test-cmd.run_impersonation_tests 
I0515 22:08:29.404] +++ working dir: /go/src/k8s.io/kubernetes
I0515 22:08:29.407] +++ command: run_impersonation_tests
I0515 22:08:29.421] +++ [0515 22:08:29] Testing impersonation
I0515 22:08:29.506] Successful
I0515 22:08:29.506] message:error: requesting groups or user-extra for  without impersonating a user
I0515 22:08:29.507] has:without impersonating a user
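The impersonation error above is the expected guard: group impersonation requires a user to impersonate as well. A sketch, same assumptions:
  kubectl get pods --as=user1 --as-group=system:masters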
I0515 22:08:29.729] certificatesigningrequest.certificates.k8s.io/foo created
I0515 22:08:29.856] authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
I0515 22:08:29.960] authorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
I0515 22:08:30.055] certificatesigningrequest.certificates.k8s.io "foo" deleted
I0515 22:08:30.290] certificatesigningrequest.certificates.k8s.io/foo created
... skipping 150 lines ...
W0515 22:08:33.706] I0515 22:08:33.689847   47613 clientconn.go:1016] blockingPicker: the picked transport is not ready, loop back to repick
W0515 22:08:33.706] I0515 22:08:33.689917   47613 clientconn.go:1016] blockingPicker: the picked transport is not ready, loop back to repick
W0515 22:08:33.707] I0515 22:08:33.689929   47613 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
W0515 22:08:33.707] I0515 22:08:33.689938   47613 clientconn.go:1016] blockingPicker: the picked transport is not ready, loop back to repick
W0515 22:08:33.707] I0515 22:08:33.689953   47613 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0515 22:08:33.707] I0515 22:08:33.689992   47613 clientconn.go:1016] blockingPicker: the picked transport is not ready, loop back to repick
W0515 22:08:33.707] E0515 22:08:33.690086   47613 controller.go:179] rpc error: code = Unavailable desc = transport is closing
W0515 22:08:33.708] I0515 22:08:33.690157   47613 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0515 22:08:33.790] + make test-integration
I0515 22:08:33.891] No resources found
I0515 22:08:33.891] No resources found
I0515 22:08:33.892] +++ [0515 22:08:33] TESTS PASSED
I0515 22:08:33.892] junit report dir: /workspace/artifacts
... skipping 6 lines ...
I0515 22:08:39.818] +++ [0515 22:08:39] On try 2, etcd: : http://127.0.0.1:2379
I0515 22:08:39.830] {"action":"set","node":{"key":"/_test","value":"","modifiedIndex":4,"createdIndex":4}}
I0515 22:08:39.836] +++ [0515 22:08:39] Running integration test cases
I0515 22:08:45.246] Running tests for APIVersion: v1,admissionregistration.k8s.io/v1beta1,admission.k8s.io/v1beta1,apps/v1,apps/v1beta1,apps/v1beta2,auditregistration.k8s.io/v1alpha1,authentication.k8s.io/v1,authentication.k8s.io/v1beta1,authorization.k8s.io/v1,authorization.k8s.io/v1beta1,autoscaling/v1,autoscaling/v2beta1,autoscaling/v2beta2,batch/v1,batch/v1beta1,batch/v2alpha1,certificates.k8s.io/v1beta1,coordination.k8s.io/v1beta1,coordination.k8s.io/v1,extensions/v1beta1,events.k8s.io/v1beta1,imagepolicy.k8s.io/v1alpha1,networking.k8s.io/v1,networking.k8s.io/v1beta1,node.k8s.io/v1alpha1,node.k8s.io/v1beta1,policy/v1beta1,rbac.authorization.k8s.io/v1,rbac.authorization.k8s.io/v1beta1,rbac.authorization.k8s.io/v1alpha1,scheduling.k8s.io/v1alpha1,scheduling.k8s.io/v1beta1,scheduling.k8s.io/v1,settings.k8s.io/v1alpha1,storage.k8s.io/v1beta1,storage.k8s.io/v1,storage.k8s.io/v1alpha1,
I0515 22:08:45.294] +++ [0515 22:08:45] Running tests without code coverage
W0515 22:10:36.086] # k8s.io/kubernetes/test/integration/daemonset [k8s.io/kubernetes/test/integration/daemonset.test]
W0515 22:10:36.087] test/integration/daemonset/daemonset_test.go:129:41: undefined: clientv1core
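This compiler error is what fails the run: daemonset_test.go still references the identifier clientv1core, presumably an import alias that the PR's refactor removed or renamed, so the daemonset test package cannot build and is reported below as [build failed]. A sketch for catching this locally before pushing, assuming the usual GOPATH layout (go vet type-checks _test.go files without running them):
  cd $GOPATH/src/k8s.io/kubernetes
  go vet ./test/integration/daemonset/
Restoring the alias, or updating the remaining references in test/integration/daemonset/daemonset_test.go, is the shape of fix this points at.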
I0515 22:22:05.827] ok  	k8s.io/kubernetes/test/integration/apimachinery	277.024s
I0515 22:22:05.828] ok  	k8s.io/kubernetes/test/integration/apiserver	82.474s
I0515 22:22:05.828] ok  	k8s.io/kubernetes/test/integration/apiserver/admissionwebhook	67.226s
I0515 22:22:05.828] ok  	k8s.io/kubernetes/test/integration/apiserver/apply	53.748s
I0515 22:22:05.828] ok  	k8s.io/kubernetes/test/integration/auth	122.477s
I0515 22:22:05.829] ok  	k8s.io/kubernetes/test/integration/client	51.423s
I0515 22:22:05.829] ok  	k8s.io/kubernetes/test/integration/configmap	3.935s
I0515 22:22:05.829] ok  	k8s.io/kubernetes/test/integration/cronjob	34.901s
I0515 22:22:05.829] FAIL	k8s.io/kubernetes/test/integration/daemonset [build failed]
I0515 22:22:05.829] ok  	k8s.io/kubernetes/test/integration/defaulttolerationseconds	4.001s
I0515 22:22:05.829] ok  	k8s.io/kubernetes/test/integration/deployment	214.767s
I0515 22:22:05.829] ok  	k8s.io/kubernetes/test/integration/dryrun	32.539s
I0515 22:22:05.829] ok  	k8s.io/kubernetes/test/integration/etcd	23.611s
I0515 22:22:05.829] ok  	k8s.io/kubernetes/test/integration/evictions	14.582s
I0515 22:22:05.829] ok  	k8s.io/kubernetes/test/integration/examples	17.942s
... skipping 21 lines ...
I0515 22:22:05.832] ok  	k8s.io/kubernetes/test/integration/storageclasses	3.680s
I0515 22:22:05.832] ok  	k8s.io/kubernetes/test/integration/tls	7.006s
I0515 22:22:05.832] ok  	k8s.io/kubernetes/test/integration/ttlcontroller	10.165s
I0515 22:22:05.833] ok  	k8s.io/kubernetes/test/integration/volume	93.570s
I0515 22:22:05.833] ok  	k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration	196.575s
I0515 22:22:20.952] +++ [0515 22:22:20] Saved JUnit XML test report to /workspace/artifacts/junit_d431ed5f68ae4ddf888439fb96b687a923412204_20190515-220845.xml
I0515 22:22:20.958] Makefile:185: recipe for target 'test' failed
I0515 22:22:20.974] +++ [0515 22:22:20] Cleaning up etcd
W0515 22:22:21.075] make[1]: *** [test] Error 1
W0515 22:22:21.075] !!! [0515 22:22:20] Call tree:
W0515 22:22:21.076] !!! [0515 22:22:20]  1: hack/make-rules/test-integration.sh:102 runTests(...)
I0515 22:22:21.614] +++ [0515 22:22:21] Integration test cleanup complete
I0515 22:22:21.615] Makefile:204: recipe for target 'test-integration' failed
W0515 22:22:21.716] make: *** [test-integration] Error 1
W0515 22:22:27.996] Traceback (most recent call last):
W0515 22:22:27.997]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 178, in <module>
W0515 22:22:27.997]     ARGS.exclude_typecheck, ARGS.exclude_godep)
W0515 22:22:27.997]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 140, in main
W0515 22:22:27.997]     check(*cmd)
W0515 22:22:27.997]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W0515 22:22:27.997]     subprocess.check_call(cmd)
W0515 22:22:27.997]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0515 22:22:28.021]     raise CalledProcessError(retcode, cmd)
W0515 22:22:28.022] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=n', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'EXCLUDE_TYPECHECK=n', '-e', 'EXCLUDE_GODEP=n', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.14-v20190318-2ac98e338', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
E0515 22:22:28.030] Command failed
I0515 22:22:28.030] process 667 exited with code 1 after 28.5m
E0515 22:22:28.031] FAIL: pull-kubernetes-integration
I0515 22:22:28.031] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0515 22:22:28.761] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0515 22:22:28.829] process 112002 exited with code 0 after 0.0m
I0515 22:22:28.830] Call:  gcloud config get-value account
I0515 22:22:29.214] process 112014 exited with code 0 after 0.0m
I0515 22:22:29.214] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0515 22:22:29.214] Upload result and artifacts...
I0515 22:22:29.214] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/73835/pull-kubernetes-integration/1128780288705433602
I0515 22:22:29.215] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/73835/pull-kubernetes-integration/1128780288705433602/artifacts
W0515 22:22:30.541] CommandException: One or more URLs matched no objects.
E0515 22:22:30.722] Command failed
I0515 22:22:30.723] process 112026 exited with code 1 after 0.0m
W0515 22:22:30.723] Remote dir gs://kubernetes-jenkins/pr-logs/pull/73835/pull-kubernetes-integration/1128780288705433602/artifacts not exist yet
I0515 22:22:30.723] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/73835/pull-kubernetes-integration/1128780288705433602/artifacts
I0515 22:22:36.825] process 112168 exited with code 0 after 0.1m
W0515 22:22:36.825] metadata path /workspace/_artifacts/metadata.json does not exist
W0515 22:22:36.825] metadata not found or invalid, init with empty metadata
... skipping 23 lines ...