PR (chardch): E2E test for GPU job interrupted by node recreate
Result: FAILURE
Tests: 1 failed / 1398 succeeded
Started: 2019-05-15 23:50
Elapsed: 29m35s
Builder: gke-prow-containerd-pool-99179761-jh56
Refs: master:aaec77a9, 76401:18b61fe2
pod: 21104b46-776c-11e9-963a-0a580a6c053c
infra-commit: 3350b5955
repo: k8s.io/kubernetes
repo-commit: 16e2d5fc3764db426f4611304dd897ff308bb76a
repos: {u'k8s.io/kubernetes': u'master:aaec77a94b67878ca1bdd884f2778f4388d203f2,76401:18b61fe2371ecccf2c958a2299421721c9704c73'}

Test Failures


k8s.io/kubernetes/test/integration/auth [build failed] 0.00s
from junit_d431ed5f68ae4ddf888439fb96b687a923412204_20190516-000546.xml
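
The junit entry above reports a compile failure rather than a test assertion, so a likely way to reproduce it is to build or run just that integration package. A minimal sketch, assuming a k8s.io/kubernetes checkout at the listed repo-commit and the usual Makefile WHAT variable:

    # surface the compile error for the auth integration package
    go build ./test/integration/auth/...
    # or run only that package through the integration target
    make test-integration WHAT=./test/integration/auth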

1398 passed tests and 4 skipped tests not shown.

Error lines from build-log.txt

... skipping 318 lines ...
W0515 23:59:42.265] I0515 23:59:42.264537   47716 serving.go:312] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
W0515 23:59:42.265] I0515 23:59:42.264623   47716 server.go:558] external host was not specified, using 172.17.0.2
W0515 23:59:42.266] W0515 23:59:42.264674   47716 authentication.go:415] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
W0515 23:59:42.266] I0515 23:59:42.265262   47716 server.go:145] Version: v1.16.0-alpha.0.61+16e2d5fc3764db
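
The AnonymousAuth warning above comes from the apiserver's flag combination; a hedged sketch of the flags involved (values illustrative, not taken from this run):

    kube-apiserver --authorization-mode=AlwaysAllow --anonymous-auth=true   # triggers the warning, anonymous auth is reset to false
    kube-apiserver --authorization-mode=RBAC --anonymous-auth=true          # allowed combination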
W0515 23:59:42.753] I0515 23:59:42.752892   47716 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0515 23:59:42.754] I0515 23:59:42.752934   47716 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0515 23:59:42.754] E0515 23:59:42.753443   47716 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 23:59:42.754] E0515 23:59:42.753489   47716 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 23:59:42.755] E0515 23:59:42.753513   47716 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 23:59:42.755] E0515 23:59:42.753550   47716 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 23:59:42.755] E0515 23:59:42.753578   47716 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 23:59:42.756] E0515 23:59:42.753631   47716 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 23:59:42.756] E0515 23:59:42.753664   47716 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 23:59:42.756] E0515 23:59:42.753682   47716 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 23:59:42.757] E0515 23:59:42.753748   47716 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 23:59:42.757] E0515 23:59:42.753793   47716 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 23:59:42.757] E0515 23:59:42.753823   47716 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 23:59:42.757] E0515 23:59:42.753862   47716 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 23:59:42.758] I0515 23:59:42.753902   47716 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0515 23:59:42.758] I0515 23:59:42.753909   47716 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0515 23:59:42.758] I0515 23:59:42.755334   47716 client.go:354] parsed scheme: ""
W0515 23:59:42.758] I0515 23:59:42.755357   47716 client.go:354] scheme "" not registered, fallback to default scheme
W0515 23:59:42.759] I0515 23:59:42.755414   47716 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0515 23:59:42.759] I0515 23:59:42.755469   47716 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 361 lines ...
W0515 23:59:43.342] W0515 23:59:43.341633   47716 genericapiserver.go:347] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0515 23:59:43.752] I0515 23:59:43.751621   47716 client.go:354] parsed scheme: ""
W0515 23:59:43.752] I0515 23:59:43.751715   47716 client.go:354] scheme "" not registered, fallback to default scheme
W0515 23:59:43.752] I0515 23:59:43.751795   47716 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0515 23:59:43.753] I0515 23:59:43.753370   47716 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0515 23:59:43.754] I0515 23:59:43.753944   47716 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0515 23:59:44.279] E0515 23:59:44.278612   47716 prometheus.go:55] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 23:59:44.279] E0515 23:59:44.278682   47716 prometheus.go:68] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 23:59:44.280] E0515 23:59:44.278717   47716 prometheus.go:82] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 23:59:44.280] E0515 23:59:44.278742   47716 prometheus.go:96] failed to register workDuration metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 23:59:44.280] E0515 23:59:44.278768   47716 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 23:59:44.280] E0515 23:59:44.278801   47716 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 23:59:44.281] E0515 23:59:44.278815   47716 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 23:59:44.281] E0515 23:59:44.278841   47716 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 23:59:44.281] E0515 23:59:44.278889   47716 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 23:59:44.281] E0515 23:59:44.278930   47716 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 23:59:44.281] E0515 23:59:44.278988   47716 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 23:59:44.282] E0515 23:59:44.279008   47716 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0515 23:59:44.282] I0515 23:59:44.279029   47716 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0515 23:59:44.282] I0515 23:59:44.279034   47716 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0515 23:59:44.282] I0515 23:59:44.280252   47716 client.go:354] parsed scheme: ""
W0515 23:59:44.282] I0515 23:59:44.280279   47716 client.go:354] scheme "" not registered, fallback to default scheme
W0515 23:59:44.282] I0515 23:59:44.280316   47716 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0515 23:59:44.283] I0515 23:59:44.280357   47716 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 69 lines ...
W0516 00:00:28.719] I0516 00:00:28.717760   51063 gc_controller.go:76] Starting GC controller
W0516 00:00:28.719] I0516 00:00:28.717764   51063 controller_utils.go:1029] Waiting for caches to sync for certificate controller
W0516 00:00:28.719] I0516 00:00:28.717788   51063 ttl_controller.go:116] Starting TTL controller
W0516 00:00:28.719] I0516 00:00:28.717806   51063 controller_utils.go:1029] Waiting for caches to sync for GC controller
W0516 00:00:28.719] I0516 00:00:28.717813   51063 controller_utils.go:1029] Waiting for caches to sync for TTL controller
W0516 00:00:28.719] I0516 00:00:28.717822   51063 node_lifecycle_controller.go:77] Sending events to api server
W0516 00:00:28.719] E0516 00:00:28.717888   51063 core.go:160] failed to start cloud node lifecycle controller: no cloud provider provided
W0516 00:00:28.720] W0516 00:00:28.717897   51063 controllermanager.go:515] Skipping "cloud-node-lifecycle"
W0516 00:00:28.720] I0516 00:00:28.718903   51063 controllermanager.go:523] Started "daemonset"
W0516 00:00:28.720] I0516 00:00:28.719153   51063 daemon_controller.go:267] Starting daemon sets controller
W0516 00:00:28.720] I0516 00:00:28.719306   51063 controller_utils.go:1029] Waiting for caches to sync for daemon sets controller
W0516 00:00:28.720] I0516 00:00:28.719917   51063 controllermanager.go:523] Started "replicaset"
W0516 00:00:28.720] W0516 00:00:28.719952   51063 controllermanager.go:515] Skipping "nodeipam"
W0516 00:00:28.720] W0516 00:00:28.719961   51063 controllermanager.go:515] Skipping "ttl-after-finished"
W0516 00:00:28.721] I0516 00:00:28.721195   51063 replica_set.go:182] Starting replicaset controller
W0516 00:00:28.721] I0516 00:00:28.721418   51063 controller_utils.go:1029] Waiting for caches to sync for ReplicaSet controller
W0516 00:00:28.729] I0516 00:00:28.727885   51063 controllermanager.go:523] Started "namespace"
W0516 00:00:28.729] I0516 00:00:28.728152   51063 controllermanager.go:523] Started "csrcleaner"
W0516 00:00:28.730] I0516 00:00:28.728949   51063 namespace_controller.go:186] Starting namespace controller
W0516 00:00:28.730] I0516 00:00:28.728971   51063 controller_utils.go:1029] Waiting for caches to sync for namespace controller
W0516 00:00:28.730] E0516 00:00:28.728975   51063 core.go:76] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0516 00:00:28.730] W0516 00:00:28.728992   51063 controllermanager.go:515] Skipping "service"
W0516 00:00:28.730] I0516 00:00:28.728997   51063 cleaner.go:81] Starting CSR cleaner controller
W0516 00:00:28.731] I0516 00:00:28.729004   51063 core.go:170] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W0516 00:00:28.731] W0516 00:00:28.729010   51063 controllermanager.go:515] Skipping "route"
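
The skipped service, route, and cloud-node-lifecycle controllers above are a consequence of running the controller manager without a cloud provider; a hedged sketch of the flags these messages refer to (provider value illustrative):

    # with a provider set, the service and route controllers start instead of being skipped
    kube-controller-manager --cloud-provider=gce --allocate-node-cidrs=true --configure-cloud-routes=true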
W0516 00:00:28.731] I0516 00:00:28.729876   51063 controllermanager.go:523] Started "persistentvolume-binder"
W0516 00:00:28.732] I0516 00:00:28.729889   51063 pv_controller_base.go:271] Starting persistent volume controller
... skipping 76 lines ...
W0516 00:00:29.357] I0516 00:00:29.357144   51063 controllermanager.go:523] Started "pvc-protection"
W0516 00:00:29.357] I0516 00:00:29.357356   51063 pvc_protection_controller.go:100] Starting PVC protection controller
W0516 00:00:29.358] I0516 00:00:29.357383   51063 controller_utils.go:1029] Waiting for caches to sync for PVC protection controller
W0516 00:00:29.358] I0516 00:00:29.358538   51063 controllermanager.go:523] Started "pv-protection"
W0516 00:00:29.359] I0516 00:00:29.358592   51063 pv_protection_controller.go:82] Starting PV protection controller
W0516 00:00:29.359] I0516 00:00:29.358746   51063 controller_utils.go:1029] Waiting for caches to sync for PV protection controller
W0516 00:00:29.389] W0516 00:00:29.388830   51063 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0516 00:00:29.418] I0516 00:00:29.417952   51063 controller_utils.go:1036] Caches are synced for ReplicationController controller
W0516 00:00:29.418] I0516 00:00:29.417952   51063 controller_utils.go:1036] Caches are synced for TTL controller
W0516 00:00:29.419] I0516 00:00:29.417963   51063 controller_utils.go:1036] Caches are synced for expand controller
W0516 00:00:29.419] I0516 00:00:29.417983   51063 controller_utils.go:1036] Caches are synced for certificate controller
W0516 00:00:29.419] I0516 00:00:29.418013   51063 controller_utils.go:1036] Caches are synced for GC controller
W0516 00:00:29.429] I0516 00:00:29.429254   51063 controller_utils.go:1036] Caches are synced for namespace controller
... skipping 45 lines ...
I0516 00:00:30.132] Successful: --output json has correct server info
I0516 00:00:30.132] +++ [0516 00:00:30] Testing kubectl version: verify json output using additional --client flag does not contain serverVersion
W0516 00:00:30.252] I0516 00:00:30.252178   51063 controller_utils.go:1036] Caches are synced for resource quota controller
W0516 00:00:30.341] I0516 00:00:30.340727   51063 controller_utils.go:1036] Caches are synced for garbage collector controller
W0516 00:00:30.341] I0516 00:00:30.340769   51063 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
W0516 00:00:30.349] I0516 00:00:30.349082   51063 controller_utils.go:1036] Caches are synced for ClusterRoleAggregator controller
W0516 00:00:30.360] E0516 00:00:30.359602   51063 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
W0516 00:00:30.360] E0516 00:00:30.359729   51063 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0516 00:00:30.366] E0516 00:00:30.365744   51063 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0516 00:00:30.378] E0516 00:00:30.377571   51063 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I0516 00:00:30.478] Successful: --client --output json has correct client info
I0516 00:00:30.479] Successful: --client --output json has no server info
I0516 00:00:30.479] +++ [0516 00:00:30] Testing kubectl version: compare json output using additional --short flag
I0516 00:00:30.479] Successful: --short --output client json info is equal to non short result
I0516 00:00:30.479] Successful: --short --output server json info is equal to non short result
I0516 00:00:30.479] +++ [0516 00:00:30] Testing kubectl version: compare json output with yaml output
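
A hedged sketch of the kubectl version variants these checks appear to exercise (flags are standard kubectl; the exact invocations are not shown in the log):

    kubectl version --output=json            # client and server info
    kubectl version --client --output=json   # client info only, no serverVersion
    kubectl version --short                  # short form, compared against the full output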
... skipping 49 lines ...
I0516 00:00:33.279] +++ working dir: /go/src/k8s.io/kubernetes
I0516 00:00:33.282] +++ command: run_RESTMapper_evaluation_tests
I0516 00:00:33.294] +++ [0516 00:00:33] Creating namespace namespace-1557964833-541
I0516 00:00:33.366] namespace/namespace-1557964833-541 created
I0516 00:00:33.444] Context "test" modified.
I0516 00:00:33.453] +++ [0516 00:00:33] Testing RESTMapper
I0516 00:00:33.560] +++ [0516 00:00:33] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0516 00:00:33.578] +++ exit code: 0
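
A hedged sketch of the RESTMapper check above, plus how the recognized resource types (the table that follows) can be listed; both commands are standard kubectl:

    kubectl get unknownresourcetype   # expected to fail: the server doesn't have that resource type
    kubectl api-resources             # prints NAME/SHORTNAMES/APIGROUP/NAMESPACED/KIND as below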
I0516 00:00:33.703] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0516 00:00:33.704] bindings                                                                      true         Binding
I0516 00:00:33.704] componentstatuses                 cs                                          false        ComponentStatus
I0516 00:00:33.704] configmaps                        cm                                          true         ConfigMap
I0516 00:00:33.705] endpoints                         ep                                          true         Endpoints
... skipping 661 lines ...
I0516 00:00:55.657] poddisruptionbudget.policy/test-pdb-3 created
I0516 00:00:55.757] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0516 00:00:55.834] poddisruptionbudget.policy/test-pdb-4 created
I0516 00:00:55.934] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0516 00:00:56.106] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:00:56.336] pod/env-test-pod created
W0516 00:00:56.437] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0516 00:00:56.437] error: setting 'all' parameter but found a non empty selector. 
W0516 00:00:56.438] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0516 00:00:56.438] I0516 00:00:55.293378   47716 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W0516 00:00:56.438] error: min-available and max-unavailable cannot be both specified
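
The last error above is the mutually exclusive pair of PodDisruptionBudget flags; a hedged sketch of the valid forms (name and selector illustrative):

    kubectl create poddisruptionbudget my-pdb --selector=app=nginx --min-available=2
    kubectl create poddisruptionbudget my-pdb --selector=app=nginx --max-unavailable=50%
    # passing both --min-available and --max-unavailable yields the error logged above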
I0516 00:00:56.538] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0516 00:00:56.539] Name:         env-test-pod
I0516 00:00:56.539] Namespace:    test-kubectl-describe-pod
I0516 00:00:56.539] Priority:     0
I0516 00:00:56.539] Node:         <none>
I0516 00:00:56.539] Labels:       <none>
... skipping 143 lines ...
I0516 00:01:09.033] service "modified" deleted
I0516 00:01:09.121] replicationcontroller "modified" deleted
I0516 00:01:09.432] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:01:09.626] pod/valid-pod created
I0516 00:01:09.742] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0516 00:01:09.904] Successful
I0516 00:01:09.904] message:Error from server: cannot restore map from string
I0516 00:01:09.904] has:cannot restore map from string
I0516 00:01:09.997] Successful
I0516 00:01:09.997] message:pod/valid-pod patched (no change)
I0516 00:01:09.997] has:patched (no change)
I0516 00:01:10.091] pod/valid-pod patched
I0516 00:01:10.195] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0516 00:01:10.300] core.sh:457: Successful get pods {{range.items}}{{.metadata.annotations}}:{{end}}: map[kubernetes.io/change-cause:kubectl patch pod valid-pod --server=http://127.0.0.1:8080 --match-server-version=true --record=true --patch={"spec":{"containers":[{"name": "kubernetes-serve-hostname", "image": "nginx"}]}}]:
I0516 00:01:10.379] pod/valid-pod patched
I0516 00:01:10.481] core.sh:461: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx2:
I0516 00:01:10.565] pod/valid-pod patched
W0516 00:01:10.665] E0516 00:01:09.894315   47716 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
I0516 00:01:10.766] core.sh:465: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0516 00:01:10.766] pod/valid-pod patched
I0516 00:01:10.870] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0516 00:01:10.949] pod/valid-pod patched
I0516 00:01:11.054] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0516 00:01:11.223] pod/valid-pod patched
I0516 00:01:11.331] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0516 00:01:11.517] +++ [0516 00:01:11] "kubectl patch with resourceVersion 501" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0516 00:01:11.779] pod "valid-pod" deleted
I0516 00:01:11.790] pod/valid-pod replaced
I0516 00:01:11.906] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0516 00:01:12.115] Successful
I0516 00:01:12.115] message:error: --grace-period must have --force specified
I0516 00:01:12.115] has:\-\-grace-period must have \-\-force specified
I0516 00:01:12.318] Successful
I0516 00:01:12.318] message:error: --timeout must have --force specified
I0516 00:01:12.318] has:\-\-timeout must have \-\-force specified
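
A hedged sketch of the flag pairing these two checks assert on (pod name from the log, flags are standard kubectl):

    kubectl delete pod valid-pod --grace-period=0          # rejected: --grace-period must have --force
    kubectl delete pod valid-pod --force --grace-period=0  # accepted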
I0516 00:01:12.509] node/node-v1-test created
W0516 00:01:12.610] W0516 00:01:12.508703   51063 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0516 00:01:12.722] node/node-v1-test replaced
I0516 00:01:12.840] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0516 00:01:12.922] node "node-v1-test" deleted
I0516 00:01:13.032] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0516 00:01:13.366] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0516 00:01:14.601] core.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 26 lines ...
I0516 00:01:16.091] pod/redis-master created
I0516 00:01:16.095] pod/valid-pod created
W0516 00:01:16.196] Edit cancelled, no changes made.
W0516 00:01:16.196] Edit cancelled, no changes made.
W0516 00:01:16.196] Edit cancelled, no changes made.
W0516 00:01:16.196] Edit cancelled, no changes made.
W0516 00:01:16.197] error: 'name' already has a value (valid-pod), and --overwrite is false
W0516 00:01:16.197] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
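
The "'name' already has a value" error above is kubectl label refusing to change an existing label; a hedged sketch (pod name and label key from the log, new value illustrative):

    kubectl label pod valid-pod name=renamed-pod             # fails: --overwrite is false
    kubectl label pod valid-pod name=renamed-pod --overwrite # succeeds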
I0516 00:01:16.298] core.sh:614: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: redis-master:valid-pod:
I0516 00:01:16.313] core.sh:618: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: redis-master:valid-pod:
I0516 00:01:16.397] pod "redis-master" deleted
I0516 00:01:16.403] pod "valid-pod" deleted
I0516 00:01:16.506] core.sh:622: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 72 lines ...
I0516 00:01:23.224] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0516 00:01:23.226] +++ working dir: /go/src/k8s.io/kubernetes
I0516 00:01:23.229] +++ command: run_kubectl_create_error_tests
I0516 00:01:23.240] +++ [0516 00:01:23] Creating namespace namespace-1557964883-29072
I0516 00:01:23.317] namespace/namespace-1557964883-29072 created
I0516 00:01:23.390] Context "test" modified.
I0516 00:01:23.398] +++ [0516 00:01:23] Testing kubectl create with error
W0516 00:01:23.499] Error: must specify one of -f and -k
W0516 00:01:23.499] 
W0516 00:01:23.499] Create a resource from a file or from stdin.
W0516 00:01:23.499] 
W0516 00:01:23.499]  JSON and YAML formats are accepted.
W0516 00:01:23.499] 
W0516 00:01:23.499] Examples:
... skipping 41 lines ...
W0516 00:01:23.505] 
W0516 00:01:23.505] Usage:
W0516 00:01:23.506]   kubectl create -f FILENAME [options]
W0516 00:01:23.506] 
W0516 00:01:23.506] Use "kubectl <command> --help" for more information about a given command.
W0516 00:01:23.506] Use "kubectl options" for a list of global command-line options (applies to all commands).
I0516 00:01:23.692] +++ [0516 00:01:23] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0516 00:01:23.792] kubectl convert is DEPRECATED and will be removed in a future version.
W0516 00:01:23.793] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0516 00:01:23.893] +++ exit code: 0
I0516 00:01:23.917] Recording: run_kubectl_apply_tests
I0516 00:01:23.918] Running command: run_kubectl_apply_tests
I0516 00:01:23.938] 
... skipping 92 lines ...
I0516 00:01:28.622] message:k8s.gcr.io/perl
I0516 00:01:28.622] has not:custom-image
I0516 00:01:28.624] Successful
I0516 00:01:28.624] message:k8s.gcr.io/perl
I0516 00:01:28.624] has:k8s.gcr.io/perl
I0516 00:01:28.714] cronjob.batch/pi image updated
W0516 00:01:28.814] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
W0516 00:01:28.815] kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0516 00:01:28.815] I0516 00:01:27.012294   47716 controller.go:606] quota admission added evaluator for: jobs.batch
W0516 00:01:28.815] I0516 00:01:27.029032   51063 event.go:258] Event(v1.ObjectReference{Kind:"Job", Namespace:"namespace-1557964886-27437", Name:"pi", UID:"05cbab73-d900-478d-b64b-274bebb6ed9a", APIVersion:"batch/v1", ResourceVersion:"510", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: pi-m2jdp
W0516 00:01:28.816] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0516 00:01:28.816] I0516 00:01:27.603724   47716 controller.go:606] quota admission added evaluator for: deployments.apps
W0516 00:01:28.816] I0516 00:01:27.625247   47716 controller.go:606] quota admission added evaluator for: replicasets.apps
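
A hedged sketch of the replacements the kubectl run deprecation warnings above point to (names and images illustrative; create job and create deployment are standard subcommands):

    kubectl create job pi --image=k8s.gcr.io/perl -- perl -Mbignum=bpi -wle 'print bpi(20)'
    kubectl create deployment nginx --image=k8s.gcr.io/nginx:test-cmd
    kubectl run pi --generator=run-pod/v1 --image=k8s.gcr.io/perl --restart=Never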
... skipping 23 lines ...
I0516 00:01:29.165] Context "test" modified.
I0516 00:01:29.173] +++ [0516 00:01:29] Testing kubectl create filter
I0516 00:01:29.269] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:01:29.483] pod/selector-test-pod created
I0516 00:01:29.598] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0516 00:01:29.691] Successful
I0516 00:01:29.691] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0516 00:01:29.691] has:pods "selector-test-pod-dont-apply" not found
I0516 00:01:29.776] pod "selector-test-pod" deleted
I0516 00:01:29.800] +++ exit code: 0
I0516 00:01:29.846] Recording: run_kubectl_apply_deployments_tests
I0516 00:01:29.846] Running command: run_kubectl_apply_deployments_tests
I0516 00:01:29.867] 
... skipping 26 lines ...
I0516 00:01:31.725] apps.sh:131: Successful get deployments my-depl {{.metadata.labels.l2}}: l2
I0516 00:01:31.812] deployment.extensions "my-depl" deleted
I0516 00:01:31.819] replicaset.extensions "my-depl-588655868c" deleted
I0516 00:01:31.824] replicaset.extensions "my-depl-69cd868dd5" deleted
I0516 00:01:31.833] pod "my-depl-588655868c-drnkv" deleted
I0516 00:01:31.844] pod "my-depl-69cd868dd5-mk6b9" deleted
W0516 00:01:31.944] E0516 00:01:31.846592   51063 replica_set.go:450] Sync "namespace-1557964889-140/my-depl-69cd868dd5" failed with replicasets.apps "my-depl-69cd868dd5" not found
W0516 00:01:31.945] I0516 00:01:31.846634   47716 controller.go:606] quota admission added evaluator for: replicasets.extensions
I0516 00:01:32.045] apps.sh:137: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:01:32.076] apps.sh:138: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:01:32.166] apps.sh:139: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:01:32.254] apps.sh:143: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:01:32.443] deployment.extensions/nginx created
W0516 00:01:32.544] I0516 00:01:32.448889   51063 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557964889-140", Name:"nginx", UID:"348664eb-3e47-4bb2-b928-ae5155ea3ac7", APIVersion:"apps/v1", ResourceVersion:"598", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-8c9ccf86d to 3
W0516 00:01:32.544] I0516 00:01:32.454819   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557964889-140", Name:"nginx-8c9ccf86d", UID:"9fdcb56f-cdbf-420d-934a-bb95505e7f7b", APIVersion:"apps/v1", ResourceVersion:"599", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8c9ccf86d-5pbq8
W0516 00:01:32.545] I0516 00:01:32.461090   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557964889-140", Name:"nginx-8c9ccf86d", UID:"9fdcb56f-cdbf-420d-934a-bb95505e7f7b", APIVersion:"apps/v1", ResourceVersion:"599", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8c9ccf86d-wmh58
W0516 00:01:32.545] I0516 00:01:32.461789   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557964889-140", Name:"nginx-8c9ccf86d", UID:"9fdcb56f-cdbf-420d-934a-bb95505e7f7b", APIVersion:"apps/v1", ResourceVersion:"599", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8c9ccf86d-skgf6
I0516 00:01:32.646] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I0516 00:01:36.884] Successful
I0516 00:01:36.885] message:Error from server (Conflict): error when applying patch:
I0516 00:01:36.886] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1557964889-140\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0516 00:01:36.886] to:
I0516 00:01:36.886] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I0516 00:01:36.886] Name: "nginx", Namespace: "namespace-1557964889-140"
I0516 00:01:36.889] Object: &{map["apiVersion":"extensions/v1beta1" "kind":"Deployment" "metadata":map["annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1557964889-140\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "creationTimestamp":"2019-05-16T00:01:32Z" "generation":'\x01' "labels":map["name":"nginx"] "managedFields":[map["apiVersion":"apps/v1" "fields":map["f:metadata":map["f:annotations":map["f:deployment.kubernetes.io/revision":map[]]] "f:status":map["f:conditions":map[".":map[] "k:{\"type\":\"Available\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]]] "f:observedGeneration":map[] "f:replicas":map[] "f:unavailableReplicas":map[] "f:updatedReplicas":map[]]] "manager":"kube-controller-manager" "operation":"Update" "time":"2019-05-16T00:01:32Z"] map["apiVersion":"extensions/v1beta1" "fields":map["f:metadata":map["f:annotations":map[".":map[] "f:kubectl.kubernetes.io/last-applied-configuration":map[]] "f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:progressDeadlineSeconds":map[] "f:replicas":map[] "f:revisionHistoryLimit":map[] "f:selector":map[".":map[] "f:matchLabels":map[".":map[] "f:name":map[]]] "f:strategy":map["f:rollingUpdate":map[".":map[] "f:maxSurge":map[] "f:maxUnavailable":map[]] "f:type":map[]] "f:template":map["f:metadata":map["f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:containers":map["k:{\"name\":\"nginx\"}":map[".":map[] "f:image":map[] "f:imagePullPolicy":map[] "f:name":map[] "f:ports":map[".":map[] "k:{\"containerPort\":80,\"protocol\":\"TCP\"}":map[".":map[] "f:containerPort":map[] "f:protocol":map[]]] "f:resources":map[] "f:terminationMessagePath":map[] "f:terminationMessagePolicy":map[]]] "f:dnsPolicy":map[] "f:restartPolicy":map[] "f:schedulerName":map[] "f:securityContext":map[] "f:terminationGracePeriodSeconds":map[]]]]] "manager":"kubectl" "operation":"Update" "time":"2019-05-16T00:01:32Z"]] "name":"nginx" "namespace":"namespace-1557964889-140" "resourceVersion":"611" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1557964889-140/deployments/nginx" "uid":"348664eb-3e47-4bb2-b928-ae5155ea3ac7"] "spec":map["progressDeadlineSeconds":%!q(int64=+2147483647) "replicas":'\x03' "revisionHistoryLimit":%!q(int64=+2147483647) "selector":map["matchLabels":map["name":"nginx1"]] "strategy":map["rollingUpdate":map["maxSurge":'\x01' "maxUnavailable":'\x01'] "type":"RollingUpdate"] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "imagePullPolicy":"IfNotPresent" "name":"nginx" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File"]] "dnsPolicy":"ClusterFirst" "restartPolicy":"Always" "schedulerName":"default-scheduler" "securityContext":map[] "terminationGracePeriodSeconds":'\x1e']]] "status":map["conditions":[map["lastTransitionTime":"2019-05-16T00:01:32Z" "lastUpdateTime":"2019-05-16T00:01:32Z" "message":"Deployment does not have minimum availability." 
"reason":"MinimumReplicasUnavailable" "status":"False" "type":"Available"]] "observedGeneration":'\x01' "replicas":'\x03' "unavailableReplicas":'\x03' "updatedReplicas":'\x03']]}
I0516 00:01:36.889] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I0516 00:01:36.889] has:Error from server (Conflict)
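
The Conflict above appears to stem from the applied manifest pinning a stale resourceVersion ("99" in the patch shown), so the three-way apply patch is rejected; a hedged sketch of a blunt recovery (file name taken from the log, replace --force is standard kubectl):

    # delete and re-create instead of patching, dropping the conflicting resourceVersion
    kubectl replace --force -f hack/testdata/deployment-label-change2.yaml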
W0516 00:01:37.575] I0516 00:01:37.574650   51063 horizontal.go:320] Horizontal Pod Autoscaler frontend has been deleted in namespace-1557964880-12932
W0516 00:01:41.208] E0516 00:01:41.207765   51063 replica_set.go:450] Sync "namespace-1557964889-140/nginx-8c9ccf86d" failed with Operation cannot be fulfilled on replicasets.apps "nginx-8c9ccf86d": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1557964889-140/nginx-8c9ccf86d, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 9fdcb56f-cdbf-420d-934a-bb95505e7f7b, UID in object meta: 
W0516 00:01:41.211] E0516 00:01:41.211335   51063 replica_set.go:450] Sync "namespace-1557964889-140/nginx-8c9ccf86d" failed with replicasets.apps "nginx-8c9ccf86d" not found
I0516 00:01:42.179] deployment.extensions/nginx configured
W0516 00:01:42.279] I0516 00:01:42.184180   51063 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557964889-140", Name:"nginx", UID:"7c5cc9c0-7a8d-42dd-a376-5b8b4a700bb6", APIVersion:"apps/v1", ResourceVersion:"633", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-86bb9b4d9f to 3
W0516 00:01:42.280] I0516 00:01:42.188715   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557964889-140", Name:"nginx-86bb9b4d9f", UID:"b24540c5-0cb8-4c8f-8af7-b45b4d3c0511", APIVersion:"apps/v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-86bb9b4d9f-s8pnw
W0516 00:01:42.280] I0516 00:01:42.193659   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557964889-140", Name:"nginx-86bb9b4d9f", UID:"b24540c5-0cb8-4c8f-8af7-b45b4d3c0511", APIVersion:"apps/v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-86bb9b4d9f-tnp6q
W0516 00:01:42.281] I0516 00:01:42.194156   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557964889-140", Name:"nginx-86bb9b4d9f", UID:"b24540c5-0cb8-4c8f-8af7-b45b4d3c0511", APIVersion:"apps/v1", ResourceVersion:"634", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-86bb9b4d9f-bkjvn
I0516 00:01:42.381] Successful
I0516 00:01:42.381] message:        "name": "nginx2"
I0516 00:01:42.381]           "name": "nginx2"
I0516 00:01:42.382] has:"name": "nginx2"
W0516 00:01:46.597] E0516 00:01:46.596903   51063 replica_set.go:450] Sync "namespace-1557964889-140/nginx-86bb9b4d9f" failed with Operation cannot be fulfilled on replicasets.apps "nginx-86bb9b4d9f": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1557964889-140/nginx-86bb9b4d9f, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: b24540c5-0cb8-4c8f-8af7-b45b4d3c0511, UID in object meta: 
W0516 00:01:47.576] I0516 00:01:47.576080   51063 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557964889-140", Name:"nginx", UID:"a946aac0-f13f-4f97-841e-b32d6ebb4c65", APIVersion:"apps/v1", ResourceVersion:"665", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-86bb9b4d9f to 3
W0516 00:01:47.581] I0516 00:01:47.580789   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557964889-140", Name:"nginx-86bb9b4d9f", UID:"7f17aa22-0fe3-4b80-bcd6-bc33dce1b2f8", APIVersion:"apps/v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-86bb9b4d9f-xr27n
W0516 00:01:47.587] I0516 00:01:47.587217   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557964889-140", Name:"nginx-86bb9b4d9f", UID:"7f17aa22-0fe3-4b80-bcd6-bc33dce1b2f8", APIVersion:"apps/v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-86bb9b4d9f-d4npt
W0516 00:01:47.588] I0516 00:01:47.587982   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557964889-140", Name:"nginx-86bb9b4d9f", UID:"7f17aa22-0fe3-4b80-bcd6-bc33dce1b2f8", APIVersion:"apps/v1", ResourceVersion:"666", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-86bb9b4d9f-45kdt
I0516 00:01:47.689] Successful
I0516 00:01:47.689] message:The Deployment "nginx" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"name":"nginx3"}: `selector` does not match template `labels`
... skipping 159 lines ...
I0516 00:01:49.860] +++ [0516 00:01:49] Creating namespace namespace-1557964909-32208
I0516 00:01:49.940] namespace/namespace-1557964909-32208 created
I0516 00:01:50.029] Context "test" modified.
I0516 00:01:50.038] +++ [0516 00:01:50] Testing kubectl get
I0516 00:01:50.145] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:01:50.237] Successful
I0516 00:01:50.238] message:Error from server (NotFound): pods "abc" not found
I0516 00:01:50.238] has:pods "abc" not found
I0516 00:01:50.342] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:01:50.427] Successful
I0516 00:01:50.428] message:Error from server (NotFound): pods "abc" not found
I0516 00:01:50.428] has:pods "abc" not found
I0516 00:01:50.518] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:01:50.607] Successful
I0516 00:01:50.608] message:{
I0516 00:01:50.608]     "apiVersion": "v1",
I0516 00:01:50.608]     "items": [],
... skipping 23 lines ...
I0516 00:01:50.969] has not:No resources found
I0516 00:01:51.061] Successful
I0516 00:01:51.061] message:NAME
I0516 00:01:51.061] has not:No resources found
I0516 00:01:51.154] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:01:51.262] Successful
I0516 00:01:51.262] message:error: the server doesn't have a resource type "foobar"
I0516 00:01:51.262] has not:No resources found
I0516 00:01:51.349] Successful
I0516 00:01:51.350] message:No resources found.
I0516 00:01:51.350] has:No resources found
I0516 00:01:51.439] Successful
I0516 00:01:51.439] message:
I0516 00:01:51.439] has not:No resources found
I0516 00:01:51.528] Successful
I0516 00:01:51.528] message:No resources found.
I0516 00:01:51.528] has:No resources found
I0516 00:01:51.631] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:01:51.727] Successful
I0516 00:01:51.727] message:Error from server (NotFound): pods "abc" not found
I0516 00:01:51.727] has:pods "abc" not found
I0516 00:01:51.729] FAIL!
I0516 00:01:51.729] message:Error from server (NotFound): pods "abc" not found
I0516 00:01:51.729] has not:List
I0516 00:01:51.730] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0516 00:01:51.852] Successful
I0516 00:01:51.852] message:I0516 00:01:51.799763   61778 loader.go:359] Config loaded from file:  /tmp/tmp.hJddMaNPlg/.kube/config
I0516 00:01:51.853] I0516 00:01:51.801107   61778 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 0 milliseconds
I0516 00:01:51.853] I0516 00:01:51.822858   61778 round_trippers.go:438] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 888 lines ...
I0516 00:01:57.499] Successful
I0516 00:01:57.499] message:NAME    DATA   AGE
I0516 00:01:57.499] one     0      0s
I0516 00:01:57.500] three   0      0s
I0516 00:01:57.500] two     0      0s
I0516 00:01:57.500] STATUS    REASON          MESSAGE
I0516 00:01:57.500] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0516 00:01:57.500] has not:watch is only supported on individual resources
I0516 00:01:58.594] Successful
I0516 00:01:58.594] message:STATUS    REASON          MESSAGE
I0516 00:01:58.595] Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0516 00:01:58.595] has not:watch is only supported on individual resources
I0516 00:01:58.600] +++ [0516 00:01:58] Creating namespace namespace-1557964918-31970
I0516 00:01:58.675] namespace/namespace-1557964918-31970 created
I0516 00:01:58.751] Context "test" modified.
I0516 00:01:58.856] get.sh:157: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:01:59.051] pod/valid-pod created
... skipping 104 lines ...
I0516 00:01:59.160] }
I0516 00:01:59.247] get.sh:162: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0516 00:01:59.513] <no value>Successful
I0516 00:01:59.513] message:valid-pod:
I0516 00:01:59.513] has:valid-pod:
I0516 00:01:59.606] Successful
I0516 00:01:59.606] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0516 00:01:59.606] 	template was:
I0516 00:01:59.607] 		{.missing}
I0516 00:01:59.607] 	object given to jsonpath engine was:
I0516 00:01:59.609] 		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2019-05-16T00:01:59Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"apiVersion":"v1", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}, "f:requests":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}, "f:terminationMessagePolicy":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}, "f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}}}, "manager":"kubectl", "operation":"Update", "time":"2019-05-16T00:01:59Z"}}, "name":"valid-pod", "namespace":"namespace-1557964918-31970", "resourceVersion":"707", "selfLink":"/api/v1/namespaces/namespace-1557964918-31970/pods/valid-pod", "uid":"a5d1212e-14f1-4644-9935-ddf67bfb3698"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0516 00:01:59.609] has:missing is not found
I0516 00:01:59.703] Successful
I0516 00:01:59.703] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0516 00:01:59.704] 	template was:
I0516 00:01:59.704] 		{{.missing}}
I0516 00:01:59.704] 	raw data was:
I0516 00:01:59.705] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-05-16T00:01:59Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-05-16T00:01:59Z"}],"name":"valid-pod","namespace":"namespace-1557964918-31970","resourceVersion":"707","selfLink":"/api/v1/namespaces/namespace-1557964918-31970/pods/valid-pod","uid":"a5d1212e-14f1-4644-9935-ddf67bfb3698"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0516 00:01:59.705] 	object given to template engine was:
I0516 00:01:59.706] 		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2019-05-16T00:01:59Z labels:map[name:valid-pod] managedFields:[map[apiVersion:v1 fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[.:map[] f:image:map[] f:imagePullPolicy:map[] f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[.:map[] f:cpu:map[] f:memory:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[] f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[]]] manager:kubectl operation:Update time:2019-05-16T00:01:59Z]] name:valid-pod namespace:namespace-1557964918-31970 resourceVersion:707 selfLink:/api/v1/namespaces/namespace-1557964918-31970/pods/valid-pod uid:a5d1212e-14f1-4644-9935-ddf67bfb3698] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
I0516 00:01:59.706] has:map has no entry for key "missing"
W0516 00:01:59.806] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
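
A hedged sketch of the two output-template failure modes being checked above, alongside a key that does exist (pod name from the log, flags are standard kubectl):

    kubectl get pod valid-pod -o jsonpath='{.missing}'        # error: missing is not found
    kubectl get pod valid-pod -o go-template='{{.missing}}'   # error: map has no entry for key "missing"
    kubectl get pod valid-pod -o jsonpath='{.metadata.name}'  # prints valid-pod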
I0516 00:02:00.798] Successful
I0516 00:02:00.798] message:NAME        READY   STATUS    RESTARTS   AGE
I0516 00:02:00.798] valid-pod   0/1     Pending   0          0s
I0516 00:02:00.798] STATUS      REASON          MESSAGE
I0516 00:02:00.799] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0516 00:02:00.799] has:STATUS
I0516 00:02:00.801] Successful
I0516 00:02:00.801] message:NAME        READY   STATUS    RESTARTS   AGE
I0516 00:02:00.801] valid-pod   0/1     Pending   0          0s
I0516 00:02:00.801] STATUS      REASON          MESSAGE
I0516 00:02:00.801] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0516 00:02:00.801] has:valid-pod
I0516 00:02:01.888] Successful
I0516 00:02:01.888] message:pod/valid-pod
I0516 00:02:01.888] has not:STATUS
I0516 00:02:01.890] Successful
I0516 00:02:01.890] message:pod/valid-pod
... skipping 142 lines ...
I0516 00:02:03.002]   terminationGracePeriodSeconds: 30
I0516 00:02:03.003] status:
I0516 00:02:03.003]   phase: Pending
I0516 00:02:03.003]   qosClass: Guaranteed
I0516 00:02:03.003] has:name: valid-pod
I0516 00:02:03.081] Successful
I0516 00:02:03.082] message:Error from server (NotFound): pods "invalid-pod" not found
I0516 00:02:03.082] has:"invalid-pod" not found
I0516 00:02:03.162] pod "valid-pod" deleted
I0516 00:02:03.265] get.sh:200: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:02:03.466] pod/redis-master created
I0516 00:02:03.469] pod/valid-pod created
I0516 00:02:03.579] Successful
... skipping 283 lines ...
I0516 00:02:09.404] +++ command: run_kubectl_exec_pod_tests
I0516 00:02:09.417] +++ [0516 00:02:09] Creating namespace namespace-1557964929-10517
I0516 00:02:09.499] namespace/namespace-1557964929-10517 created
I0516 00:02:09.586] Context "test" modified.
I0516 00:02:09.595] +++ [0516 00:02:09] Testing kubectl exec POD COMMAND
I0516 00:02:09.699] Successful
I0516 00:02:09.699] message:Error from server (NotFound): pods "abc" not found
I0516 00:02:09.699] has:pods "abc" not found
I0516 00:02:09.902] pod/test-pod created
I0516 00:02:10.025] Successful
I0516 00:02:10.026] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0516 00:02:10.026] has not:pods "test-pod" not found
I0516 00:02:10.027] Successful
I0516 00:02:10.028] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0516 00:02:10.028] has not:pod or type/name must be specified
I0516 00:02:10.112] pod "test-pod" deleted
I0516 00:02:10.135] +++ exit code: 0
I0516 00:02:10.184] Recording: run_kubectl_exec_resource_name_tests
I0516 00:02:10.184] Running command: run_kubectl_exec_resource_name_tests
I0516 00:02:10.208] 
... skipping 2 lines ...
I0516 00:02:10.216] +++ command: run_kubectl_exec_resource_name_tests
I0516 00:02:10.228] +++ [0516 00:02:10] Creating namespace namespace-1557964930-29574
I0516 00:02:10.305] namespace/namespace-1557964930-29574 created
I0516 00:02:10.381] Context "test" modified.
I0516 00:02:10.390] +++ [0516 00:02:10] Testing kubectl exec TYPE/NAME COMMAND
I0516 00:02:10.491] Successful
I0516 00:02:10.491] message:error: the server doesn't have a resource type "foo"
I0516 00:02:10.491] has:error:
I0516 00:02:10.581] Successful
I0516 00:02:10.582] message:Error from server (NotFound): deployments.extensions "bar" not found
I0516 00:02:10.582] has:"bar" not found
I0516 00:02:10.768] pod/test-pod created
I0516 00:02:10.980] replicaset.apps/frontend created
W0516 00:02:11.081] I0516 00:02:10.985989   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557964930-29574", Name:"frontend", UID:"507264d0-db38-47e5-91c5-846075e813e4", APIVersion:"apps/v1", ResourceVersion:"824", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-hz6wc
W0516 00:02:11.082] I0516 00:02:10.990666   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557964930-29574", Name:"frontend", UID:"507264d0-db38-47e5-91c5-846075e813e4", APIVersion:"apps/v1", ResourceVersion:"824", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-w2jzj
W0516 00:02:11.082] I0516 00:02:10.990867   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557964930-29574", Name:"frontend", UID:"507264d0-db38-47e5-91c5-846075e813e4", APIVersion:"apps/v1", ResourceVersion:"824", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-xptz4
I0516 00:02:11.185] configmap/test-set-env-config created
I0516 00:02:11.287] Successful
I0516 00:02:11.287] message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
I0516 00:02:11.287] has:not implemented
I0516 00:02:11.380] Successful
I0516 00:02:11.380] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0516 00:02:11.381] has not:not found
I0516 00:02:11.382] Successful
I0516 00:02:11.382] message:Error from server (BadRequest): pod test-pod does not have a host assigned
I0516 00:02:11.383] has not:pod or type/name must be specified
I0516 00:02:11.484] Successful
I0516 00:02:11.485] message:Error from server (BadRequest): pod frontend-hz6wc does not have a host assigned
I0516 00:02:11.485] has not:not found
I0516 00:02:11.487] Successful
I0516 00:02:11.487] message:Error from server (BadRequest): pod frontend-hz6wc does not have a host assigned
I0516 00:02:11.487] has not:pod or type/name must be specified
I0516 00:02:11.567] pod "test-pod" deleted
I0516 00:02:11.655] replicaset.extensions "frontend" deleted
I0516 00:02:11.741] configmap "test-set-env-config" deleted
I0516 00:02:11.764] +++ exit code: 0
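For the TYPE/NAME variant above, kubectl first resolves the named resource to an attachable pod; a rough sketch of the cases exercised (resource names mirror the log, the commands themselves are illustrative):
  kubectl exec foo/bar -- date                        # unknown type -> the server doesn't have a resource type "foo"
  kubectl exec deployments/bar -- date                # missing object -> deployments "bar" not found
  kubectl exec configmap/test-set-env-config -- date  # not attachable -> selector for *v1.ConfigMap not implemented
  kubectl exec rs/frontend -- date                    # resolves to one frontend pod; unscheduled here, so BadRequest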
I0516 00:02:11.803] Recording: run_create_secret_tests
I0516 00:02:11.803] Running command: run_create_secret_tests
I0516 00:02:11.824] 
I0516 00:02:11.827] +++ Running case: test-cmd.run_create_secret_tests 
I0516 00:02:11.830] +++ working dir: /go/src/k8s.io/kubernetes
I0516 00:02:11.832] +++ command: run_create_secret_tests
I0516 00:02:11.927] Successful
I0516 00:02:11.927] message:Error from server (NotFound): secrets "mysecret" not found
I0516 00:02:11.927] has:secrets "mysecret" not found
I0516 00:02:12.089] Successful
I0516 00:02:12.089] message:Error from server (NotFound): secrets "mysecret" not found
I0516 00:02:12.089] has:secrets "mysecret" not found
I0516 00:02:12.091] Successful
I0516 00:02:12.091] message:user-specified
I0516 00:02:12.091] has:user-specified
I0516 00:02:12.165] Successful
I0516 00:02:12.253] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"db914e03-3d81-45d3-8724-e429d1b7e979","resourceVersion":"844","creationTimestamp":"2019-05-16T00:02:12Z"}}
... skipping 164 lines ...
I0516 00:02:15.248] valid-pod   0/1     Pending   0          1s
I0516 00:02:15.248] has:valid-pod
I0516 00:02:16.332] Successful
I0516 00:02:16.332] message:NAME        READY   STATUS    RESTARTS   AGE
I0516 00:02:16.332] valid-pod   0/1     Pending   0          1s
I0516 00:02:16.332] STATUS      REASON          MESSAGE
I0516 00:02:16.332] Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
I0516 00:02:16.333] has:Timeout exceeded while reading body
I0516 00:02:16.420] Successful
I0516 00:02:16.420] message:NAME        READY   STATUS    RESTARTS   AGE
I0516 00:02:16.420] valid-pod   0/1     Pending   0          2s
I0516 00:02:16.420] has:valid-pod
I0516 00:02:16.491] Successful
I0516 00:02:16.491] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0516 00:02:16.492] has:Invalid timeout value
I0516 00:02:16.572] pod "valid-pod" deleted
I0516 00:02:16.596] +++ exit code: 0
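The timeout checks above exercise kubectl's global --request-timeout flag; a minimal sketch, assuming a pod named valid-pod already exists:
  kubectl get pod valid-pod --request-timeout=1        # a single integer is taken as seconds
  kubectl get pod valid-pod --request-timeout=5s       # or an integer with a time unit
  kubectl get pod valid-pod --request-timeout=invalid  # rejected: error: Invalid timeout value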
I0516 00:02:16.648] Recording: run_crd_tests
I0516 00:02:16.648] Running command: run_crd_tests
I0516 00:02:16.672] 
... skipping 237 lines ...
I0516 00:02:21.444] foo.company.com/test patched
I0516 00:02:21.540] crd.sh:237: Successful get foos/test {{.patched}}: value1
I0516 00:02:21.623] foo.company.com/test patched
I0516 00:02:21.720] crd.sh:239: Successful get foos/test {{.patched}}: value2
I0516 00:02:21.804] foo.company.com/test patched
I0516 00:02:21.901] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I0516 00:02:22.066] +++ [0516 00:02:22] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0516 00:02:22.131] {
I0516 00:02:22.132]     "apiVersion": "company.com/v1",
I0516 00:02:22.132]     "kind": "Foo",
I0516 00:02:22.132]     "metadata": {
I0516 00:02:22.132]         "annotations": {
I0516 00:02:22.133]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 318 lines ...
I0516 00:02:39.247] namespace/non-native-resources created
I0516 00:02:39.460] bar.company.com/test created
I0516 00:02:39.595] crd.sh:456: Successful get bars {{len .items}}: 1
I0516 00:02:39.679] namespace "non-native-resources" deleted
I0516 00:02:44.919] crd.sh:459: Successful get bars {{len .items}}: 0
I0516 00:02:45.094] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0516 00:02:45.194] Error from server (NotFound): namespaces "non-native-resources" not found
I0516 00:02:45.295] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0516 00:02:45.304] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0516 00:02:45.409] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0516 00:02:45.447] +++ exit code: 0
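The CRD patch checks above rely on merge patches, since strategic merge is not supported for custom resources; a sketch based on the change-cause recorded in the log (the local file name is illustrative):
  kubectl patch foos/test --type=merge -p '{"patched":"value1"}'      # merge patch is accepted for a CR
  kubectl patch foos/test --type=merge -p '{"patched":null}' --record
  kubectl patch -f foo.yaml --local -p '{"patched":"x"}' -o json      # strategic merge locally -> try --type merge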
I0516 00:02:45.531] Recording: run_cmd_with_img_tests
I0516 00:02:45.531] Running command: run_cmd_with_img_tests
... skipping 10 lines ...
W0516 00:02:45.846] I0516 00:02:45.840262   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557964965-7595", Name:"test1-7b9c75bcb9", UID:"77c2414b-720d-4bb7-aaf3-7a9a0a5d4faa", APIVersion:"apps/v1", ResourceVersion:"1000", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-7b9c75bcb9-wp6qd
I0516 00:02:45.947] Successful
I0516 00:02:45.947] message:deployment.apps/test1 created
I0516 00:02:45.947] has:deployment.apps/test1 created
I0516 00:02:45.947] deployment.extensions "test1" deleted
I0516 00:02:46.023] Successful
I0516 00:02:46.024] message:error: Invalid image name "InvalidImageName": invalid reference format
I0516 00:02:46.024] has:error: Invalid image name "InvalidImageName": invalid reference format
I0516 00:02:46.040] +++ exit code: 0
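The image-name checks above validate the image reference format on the client before anything reaches the server; a minimal sketch (the valid image and creation command are illustrative):
  kubectl create deployment test1 --image=k8s.gcr.io/nginx:1.7.9  # deployment.apps/test1 created
  kubectl run test2 --image=InvalidImageName                      # error: Invalid image name "InvalidImageName": invalid reference format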
I0516 00:02:46.091] +++ [0516 00:02:46] Testing recursive resources
I0516 00:02:46.097] +++ [0516 00:02:46] Creating namespace namespace-1557964966-10142
I0516 00:02:46.172] namespace/namespace-1557964966-10142 created
I0516 00:02:46.242] Context "test" modified.
I0516 00:02:46.339] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:02:46.644] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:02:46.647] Successful
I0516 00:02:46.647] message:pod/busybox0 created
I0516 00:02:46.647] pod/busybox1 created
I0516 00:02:46.647] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0516 00:02:46.647] has:error validating data: kind not set
I0516 00:02:46.745] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:02:46.931] generic-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0516 00:02:46.934] Successful
I0516 00:02:46.934] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0516 00:02:46.934] has:Object 'Kind' is missing
I0516 00:02:47.030] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:02:47.351] generic-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0516 00:02:47.353] Successful
I0516 00:02:47.354] message:pod/busybox0 replaced
I0516 00:02:47.354] pod/busybox1 replaced
I0516 00:02:47.354] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0516 00:02:47.354] has:error validating data: kind not set
I0516 00:02:47.447] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:02:47.545] Successful
I0516 00:02:47.546] message:Name:         busybox0
I0516 00:02:47.546] Namespace:    namespace-1557964966-10142
I0516 00:02:47.546] Priority:     0
I0516 00:02:47.546] Node:         <none>
... skipping 153 lines ...
I0516 00:02:47.563] has:Object 'Kind' is missing
I0516 00:02:47.648] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:02:47.841] generic-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0516 00:02:47.844] Successful
I0516 00:02:47.844] message:pod/busybox0 annotated
I0516 00:02:47.844] pod/busybox1 annotated
I0516 00:02:47.844] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0516 00:02:47.844] has:Object 'Kind' is missing
I0516 00:02:47.939] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:02:48.265] generic-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0516 00:02:48.267] Successful
I0516 00:02:48.267] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0516 00:02:48.268] pod/busybox0 configured
I0516 00:02:48.268] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0516 00:02:48.268] pod/busybox1 configured
I0516 00:02:48.268] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0516 00:02:48.268] has:error validating data: kind not set
I0516 00:02:48.359] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:02:48.561] deployment.apps/nginx created
W0516 00:02:48.662] I0516 00:02:48.567135   51063 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557964966-10142", Name:"nginx", UID:"2f4d99d3-cd61-4936-a1f2-0480f99a412e", APIVersion:"apps/v1", ResourceVersion:"1024", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-958dc566b to 3
W0516 00:02:48.662] I0516 00:02:48.572431   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557964966-10142", Name:"nginx-958dc566b", UID:"6b706fba-95f4-48f6-987c-55296c8f47a4", APIVersion:"apps/v1", ResourceVersion:"1025", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-958dc566b-c2qmm
W0516 00:02:48.663] I0516 00:02:48.576180   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557964966-10142", Name:"nginx-958dc566b", UID:"6b706fba-95f4-48f6-987c-55296c8f47a4", APIVersion:"apps/v1", ResourceVersion:"1025", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-958dc566b-5w98c
W0516 00:02:48.663] I0516 00:02:48.578037   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557964966-10142", Name:"nginx-958dc566b", UID:"6b706fba-95f4-48f6-987c-55296c8f47a4", APIVersion:"apps/v1", ResourceVersion:"1025", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-958dc566b-9rkdh
... skipping 48 lines ...
W0516 00:02:49.163] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0516 00:02:49.264] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:02:49.342] generic-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:02:49.345] Successful
I0516 00:02:49.345] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0516 00:02:49.346] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0516 00:02:49.346] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0516 00:02:49.346] has:Object 'Kind' is missing
I0516 00:02:49.442] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:02:49.538] Successful
I0516 00:02:49.539] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0516 00:02:49.539] has:busybox0:busybox1:
I0516 00:02:49.541] Successful
I0516 00:02:49.542] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0516 00:02:49.542] has:Object 'Kind' is missing
I0516 00:02:49.639] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:02:49.743] pod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0516 00:02:49.847] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0516 00:02:49.850] Successful
I0516 00:02:49.850] message:pod/busybox0 labeled
I0516 00:02:49.850] pod/busybox1 labeled
I0516 00:02:49.850] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0516 00:02:49.851] has:Object 'Kind' is missing
I0516 00:02:49.949] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:02:50.053] pod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
W0516 00:02:50.153] I0516 00:02:49.810397   51063 namespace_controller.go:171] Namespace has been deleted non-native-resources
I0516 00:02:50.254] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0516 00:02:50.254] Successful
I0516 00:02:50.254] message:pod/busybox0 patched
I0516 00:02:50.255] pod/busybox1 patched
I0516 00:02:50.255] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0516 00:02:50.255] has:Object 'Kind' is missing
I0516 00:02:50.275] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:02:50.474] generic-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:02:50.476] Successful
I0516 00:02:50.477] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0516 00:02:50.477] pod "busybox0" force deleted
I0516 00:02:50.477] pod "busybox1" force deleted
I0516 00:02:50.477] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0516 00:02:50.478] has:Object 'Kind' is missing
I0516 00:02:50.574] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:02:50.772] replicationcontroller/busybox0 created
I0516 00:02:50.776] replicationcontroller/busybox1 created
W0516 00:02:50.877] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0516 00:02:50.877] I0516 00:02:50.777094   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557964966-10142", Name:"busybox0", UID:"6a4e544b-8b5e-4c24-a509-cd58456237f3", APIVersion:"v1", ResourceVersion:"1056", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-wj988
W0516 00:02:50.878] I0516 00:02:50.780437   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557964966-10142", Name:"busybox1", UID:"c79d02f1-2879-467c-8378-ec5494c11c44", APIVersion:"v1", ResourceVersion:"1057", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-ggsdl
I0516 00:02:50.978] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:02:50.990] generic-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:02:51.082] generic-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I0516 00:02:51.175] generic-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I0516 00:02:51.355] generic-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0516 00:02:51.448] generic-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0516 00:02:51.451] Successful
I0516 00:02:51.451] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0516 00:02:51.452] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0516 00:02:51.452] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 00:02:51.452] has:Object 'Kind' is missing
I0516 00:02:51.530] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0516 00:02:51.620] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0516 00:02:51.726] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:02:51.822] generic-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I0516 00:02:51.920] generic-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I0516 00:02:52.117] generic-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0516 00:02:52.212] generic-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0516 00:02:52.214] Successful
I0516 00:02:52.215] message:service/busybox0 exposed
I0516 00:02:52.215] service/busybox1 exposed
I0516 00:02:52.215] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 00:02:52.215] has:Object 'Kind' is missing
I0516 00:02:52.308] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:02:52.399] generic-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I0516 00:02:52.495] generic-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I0516 00:02:52.707] generic-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I0516 00:02:52.806] generic-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I0516 00:02:52.808] Successful
I0516 00:02:52.808] message:replicationcontroller/busybox0 scaled
I0516 00:02:52.808] replicationcontroller/busybox1 scaled
I0516 00:02:52.809] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 00:02:52.809] has:Object 'Kind' is missing
I0516 00:02:52.912] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:02:53.104] generic-resources.sh:381: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:02:53.106] Successful
I0516 00:02:53.106] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0516 00:02:53.106] replicationcontroller "busybox0" force deleted
I0516 00:02:53.107] replicationcontroller "busybox1" force deleted
I0516 00:02:53.107] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 00:02:53.107] has:Object 'Kind' is missing
I0516 00:02:53.201] generic-resources.sh:386: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:02:53.388] deployment.apps/nginx1-deployment created
I0516 00:02:53.393] deployment.apps/nginx0-deployment created
W0516 00:02:53.494] I0516 00:02:52.591158   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557964966-10142", Name:"busybox0", UID:"6a4e544b-8b5e-4c24-a509-cd58456237f3", APIVersion:"v1", ResourceVersion:"1077", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-pknvv
W0516 00:02:53.494] I0516 00:02:52.601060   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557964966-10142", Name:"busybox1", UID:"c79d02f1-2879-467c-8378-ec5494c11c44", APIVersion:"v1", ResourceVersion:"1081", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-hcbn2
W0516 00:02:53.495] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0516 00:02:53.495] I0516 00:02:53.398184   51063 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557964966-10142", Name:"nginx1-deployment", UID:"c8fc4e05-b06b-4e75-a9ee-ba59980b38c2", APIVersion:"apps/v1", ResourceVersion:"1098", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-67c99bcc6b to 2
W0516 00:02:53.495] I0516 00:02:53.404265   51063 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557964966-10142", Name:"nginx0-deployment", UID:"95e0e261-6d93-4d64-a48b-0433093de61e", APIVersion:"apps/v1", ResourceVersion:"1099", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-5886cf98fc to 2
W0516 00:02:53.496] I0516 00:02:53.406411   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557964966-10142", Name:"nginx1-deployment-67c99bcc6b", UID:"7f1a1bdd-768c-403c-8d10-8a96781cc362", APIVersion:"apps/v1", ResourceVersion:"1100", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-67c99bcc6b-qhpzq
W0516 00:02:53.496] I0516 00:02:53.408542   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557964966-10142", Name:"nginx0-deployment-5886cf98fc", UID:"6d057130-895e-4ed7-9ea5-8d6e775177d7", APIVersion:"apps/v1", ResourceVersion:"1101", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-5886cf98fc-s4f6k
W0516 00:02:53.496] I0516 00:02:53.411211   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557964966-10142", Name:"nginx1-deployment-67c99bcc6b", UID:"7f1a1bdd-768c-403c-8d10-8a96781cc362", APIVersion:"apps/v1", ResourceVersion:"1100", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-67c99bcc6b-vtrv8
W0516 00:02:53.496] I0516 00:02:53.412831   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557964966-10142", Name:"nginx0-deployment-5886cf98fc", UID:"6d057130-895e-4ed7-9ea5-8d6e775177d7", APIVersion:"apps/v1", ResourceVersion:"1101", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-5886cf98fc-v864j
I0516 00:02:53.597] generic-resources.sh:390: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0516 00:02:53.618] generic-resources.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0516 00:02:53.831] generic-resources.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0516 00:02:53.833] Successful
I0516 00:02:53.833] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0516 00:02:53.834] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0516 00:02:53.834] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0516 00:02:53.834] has:Object 'Kind' is missing
I0516 00:02:53.932] deployment.apps/nginx1-deployment paused
I0516 00:02:53.941] deployment.apps/nginx0-deployment paused
I0516 00:02:54.056] generic-resources.sh:402: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0516 00:02:54.058] Successful
I0516 00:02:54.058] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
I0516 00:02:54.406] 1         <none>
I0516 00:02:54.406] 
I0516 00:02:54.406] deployment.apps/nginx0-deployment 
I0516 00:02:54.406] REVISION  CHANGE-CAUSE
I0516 00:02:54.406] 1         <none>
I0516 00:02:54.406] 
I0516 00:02:54.407] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0516 00:02:54.407] has:nginx0-deployment
I0516 00:02:54.407] Successful
I0516 00:02:54.408] message:deployment.apps/nginx1-deployment 
I0516 00:02:54.408] REVISION  CHANGE-CAUSE
I0516 00:02:54.408] 1         <none>
I0516 00:02:54.408] 
I0516 00:02:54.408] deployment.apps/nginx0-deployment 
I0516 00:02:54.408] REVISION  CHANGE-CAUSE
I0516 00:02:54.409] 1         <none>
I0516 00:02:54.409] 
I0516 00:02:54.409] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0516 00:02:54.409] has:nginx1-deployment
I0516 00:02:54.410] Successful
I0516 00:02:54.410] message:deployment.apps/nginx1-deployment 
I0516 00:02:54.410] REVISION  CHANGE-CAUSE
I0516 00:02:54.410] 1         <none>
I0516 00:02:54.410] 
I0516 00:02:54.411] deployment.apps/nginx0-deployment 
I0516 00:02:54.411] REVISION  CHANGE-CAUSE
I0516 00:02:54.411] 1         <none>
I0516 00:02:54.411] 
I0516 00:02:54.412] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0516 00:02:54.412] has:Object 'Kind' is missing
I0516 00:02:54.494] deployment.apps "nginx1-deployment" force deleted
I0516 00:02:54.500] deployment.apps "nginx0-deployment" force deleted
W0516 00:02:54.601] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0516 00:02:54.602] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0516 00:02:55.607] generic-resources.sh:424: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:02:55.795] replicationcontroller/busybox0 created
I0516 00:02:55.799] replicationcontroller/busybox1 created
W0516 00:02:55.900] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0516 00:02:55.901] I0516 00:02:55.800289   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557964966-10142", Name:"busybox0", UID:"f5c9f28e-37ce-49a7-87a6-7b9f8120f29a", APIVersion:"v1", ResourceVersion:"1147", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-cl66r
W0516 00:02:55.901] I0516 00:02:55.803845   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557964966-10142", Name:"busybox1", UID:"3db22508-16ce-40b5-acdc-7312751e85ed", APIVersion:"v1", ResourceVersion:"1148", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-5lztp
I0516 00:02:56.002] generic-resources.sh:428: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0516 00:02:56.010] Successful
I0516 00:02:56.011] message:no rollbacker has been implemented for "ReplicationController"
I0516 00:02:56.011] no rollbacker has been implemented for "ReplicationController"
... skipping 3 lines ...
I0516 00:02:56.013] message:no rollbacker has been implemented for "ReplicationController"
I0516 00:02:56.013] no rollbacker has been implemented for "ReplicationController"
I0516 00:02:56.014] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 00:02:56.014] has:Object 'Kind' is missing
I0516 00:02:56.111] Successful
I0516 00:02:56.112] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 00:02:56.112] error: replicationcontrollers "busybox0" pausing is not supported
I0516 00:02:56.112] error: replicationcontrollers "busybox1" pausing is not supported
I0516 00:02:56.112] has:Object 'Kind' is missing
I0516 00:02:56.114] Successful
I0516 00:02:56.115] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 00:02:56.115] error: replicationcontrollers "busybox0" pausing is not supported
I0516 00:02:56.115] error: replicationcontrollers "busybox1" pausing is not supported
I0516 00:02:56.116] has:replicationcontrollers "busybox0" pausing is not supported
I0516 00:02:56.117] Successful
I0516 00:02:56.117] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 00:02:56.117] error: replicationcontrollers "busybox0" pausing is not supported
I0516 00:02:56.117] error: replicationcontrollers "busybox1" pausing is not supported
I0516 00:02:56.118] has:replicationcontrollers "busybox1" pausing is not supported
I0516 00:02:56.214] Successful
I0516 00:02:56.215] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 00:02:56.215] error: replicationcontrollers "busybox0" resuming is not supported
I0516 00:02:56.215] error: replicationcontrollers "busybox1" resuming is not supported
I0516 00:02:56.215] has:Object 'Kind' is missing
I0516 00:02:56.216] Successful
I0516 00:02:56.217] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 00:02:56.217] error: replicationcontrollers "busybox0" resuming is not supported
I0516 00:02:56.217] error: replicationcontrollers "busybox1" resuming is not supported
I0516 00:02:56.218] has:replicationcontrollers "busybox0" resuming is not supported
I0516 00:02:56.219] Successful
I0516 00:02:56.220] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0516 00:02:56.220] error: replicationcontrollers "busybox0" resuming is not supported
I0516 00:02:56.220] error: replicationcontrollers "busybox1" resuming is not supported
I0516 00:02:56.220] has:replicationcontrollers "busybox0" resuming is not supported
I0516 00:02:56.296] replicationcontroller "busybox0" force deleted
I0516 00:02:56.301] replicationcontroller "busybox1" force deleted
W0516 00:02:56.401] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0516 00:02:56.402] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
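The recursive-resource checks throughout this block point kubectl at a directory containing one intentionally broken manifest (busybox-broken.yaml has "ind" where "kind" should be); a rough sketch of the pattern, with paths mirroring the log:
  kubectl create -f hack/testdata/recursive/rc --recursive            # good objects are created; the broken one fails validation: kind not set
  kubectl rollout pause -f hack/testdata/recursive/rc --recursive     # per-object errors: pausing is not supported for ReplicationController
  kubectl delete -f hack/testdata/recursive/rc --recursive --force --grace-period=0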
I0516 00:02:57.310] Recording: run_namespace_tests
I0516 00:02:57.310] Running command: run_namespace_tests
I0516 00:02:57.335] 
I0516 00:02:57.337] +++ Running case: test-cmd.run_namespace_tests 
I0516 00:02:57.340] +++ working dir: /go/src/k8s.io/kubernetes
I0516 00:02:57.343] +++ command: run_namespace_tests
... skipping 4 lines ...
W0516 00:03:02.247] I0516 00:03:02.246962   51063 controller_utils.go:1029] Waiting for caches to sync for garbage collector controller
W0516 00:03:02.348] I0516 00:03:02.347356   51063 controller_utils.go:1036] Caches are synced for garbage collector controller
W0516 00:03:02.361] I0516 00:03:02.360968   51063 controller_utils.go:1029] Waiting for caches to sync for resource quota controller
W0516 00:03:02.462] I0516 00:03:02.461347   51063 controller_utils.go:1036] Caches are synced for resource quota controller
I0516 00:03:02.713] namespace/my-namespace condition met
I0516 00:03:02.811] Successful
I0516 00:03:02.811] message:Error from server (NotFound): namespaces "my-namespace" not found
I0516 00:03:02.811] has: not found
I0516 00:03:02.885] namespace/my-namespace created
I0516 00:03:02.991] core.sh:1330: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0516 00:03:03.187] Successful
I0516 00:03:03.187] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0516 00:03:03.187] namespace "kube-node-lease" deleted
... skipping 30 lines ...
I0516 00:03:03.191] namespace "namespace-1557964933-28565" deleted
I0516 00:03:03.191] namespace "namespace-1557964934-25652" deleted
I0516 00:03:03.191] namespace "namespace-1557964936-31362" deleted
I0516 00:03:03.191] namespace "namespace-1557964938-9793" deleted
I0516 00:03:03.192] namespace "namespace-1557964965-7595" deleted
I0516 00:03:03.192] namespace "namespace-1557964966-10142" deleted
I0516 00:03:03.192] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0516 00:03:03.192] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0516 00:03:03.192] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0516 00:03:03.192] has:warning: deleting cluster-scoped resources
I0516 00:03:03.192] Successful
I0516 00:03:03.192] message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0516 00:03:03.193] namespace "kube-node-lease" deleted
I0516 00:03:03.193] namespace "my-namespace" deleted
I0516 00:03:03.193] namespace "namespace-1557964830-9387" deleted
... skipping 28 lines ...
I0516 00:03:03.196] namespace "namespace-1557964933-28565" deleted
I0516 00:03:03.196] namespace "namespace-1557964934-25652" deleted
I0516 00:03:03.196] namespace "namespace-1557964936-31362" deleted
I0516 00:03:03.196] namespace "namespace-1557964938-9793" deleted
I0516 00:03:03.196] namespace "namespace-1557964965-7595" deleted
I0516 00:03:03.197] namespace "namespace-1557964966-10142" deleted
I0516 00:03:03.197] Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
I0516 00:03:03.197] Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
I0516 00:03:03.197] Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
I0516 00:03:03.197] has:namespace "my-namespace" deleted
I0516 00:03:03.303] core.sh:1342: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I0516 00:03:03.381] namespace/other created
I0516 00:03:03.479] core.sh:1346: Successful get namespaces/other {{.metadata.name}}: other
I0516 00:03:03.574] core.sh:1350: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:03:03.762] pod/valid-pod created
I0516 00:03:03.875] core.sh:1354: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0516 00:03:03.973] core.sh:1356: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0516 00:03:04.062] Successful
I0516 00:03:04.063] message:error: a resource cannot be retrieved by name across all namespaces
I0516 00:03:04.063] has:a resource cannot be retrieved by name across all namespaces
I0516 00:03:04.161] core.sh:1363: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0516 00:03:04.245] pod "valid-pod" force deleted
I0516 00:03:04.350] core.sh:1367: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:03:04.427] namespace "other" deleted
W0516 00:03:04.528] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
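The namespace checks above cover per-namespace lookups and the all-namespaces restriction; a minimal sketch, assuming the pod and namespace shown in the log:
  kubectl get pods --namespace=other          # lists valid-pod
  kubectl get pods -n other                   # short form of the same
  kubectl get pod valid-pod --all-namespaces  # error: a resource cannot be retrieved by name across all namespaces
  kubectl delete pod valid-pod -n other --force --grace-period=0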
... skipping 151 lines ...
I0516 00:03:24.937] +++ command: run_client_config_tests
I0516 00:03:24.948] +++ [0516 00:03:24] Creating namespace namespace-1557965004-14840
I0516 00:03:25.024] namespace/namespace-1557965004-14840 created
I0516 00:03:25.098] Context "test" modified.
I0516 00:03:25.107] +++ [0516 00:03:25] Testing client config
I0516 00:03:25.180] Successful
I0516 00:03:25.180] message:error: stat missing: no such file or directory
I0516 00:03:25.180] has:missing: no such file or directory
I0516 00:03:25.249] Successful
I0516 00:03:25.250] message:error: stat missing: no such file or directory
I0516 00:03:25.250] has:missing: no such file or directory
I0516 00:03:25.321] Successful
I0516 00:03:25.322] message:error: stat missing: no such file or directory
I0516 00:03:25.322] has:missing: no such file or directory
I0516 00:03:25.396] Successful
I0516 00:03:25.396] message:Error in configuration: context was not found for specified context: missing-context
I0516 00:03:25.396] has:context was not found for specified context: missing-context
I0516 00:03:25.470] Successful
I0516 00:03:25.471] message:error: no server found for cluster "missing-cluster"
I0516 00:03:25.471] has:no server found for cluster "missing-cluster"
I0516 00:03:25.546] Successful
I0516 00:03:25.546] message:error: auth info "missing-user" does not exist
I0516 00:03:25.546] has:auth info "missing-user" does not exist
I0516 00:03:25.697] Successful
I0516 00:03:25.697] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0516 00:03:25.698] has:Error loading config file
I0516 00:03:25.774] Successful
I0516 00:03:25.774] message:error: stat missing-config: no such file or directory
I0516 00:03:25.774] has:no such file or directory
I0516 00:03:25.789] +++ exit code: 0
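The client-config checks above each point kubectl at something that does not exist; a minimal sketch using the standard global flags (names mirror the log messages):
  kubectl get pods --kubeconfig=missing              # error: stat missing: no such file or directory
  kubectl get pods --context=missing-context         # context was not found for specified context
  kubectl get pods --cluster=missing-cluster         # error: no server found for cluster "missing-cluster"
  kubectl get pods --user=missing-user               # error: auth info "missing-user" does not exist
  kubectl get pods --kubeconfig=/tmp/newconfig.yaml  # unregistered config version in the file -> Error loading config file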
I0516 00:03:25.832] Recording: run_service_accounts_tests
I0516 00:03:25.833] Running command: run_service_accounts_tests
I0516 00:03:25.855] 
I0516 00:03:25.857] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 34 lines ...
I0516 00:03:32.679] Labels:                        run=pi
I0516 00:03:32.679] Annotations:                   <none>
I0516 00:03:32.679] Schedule:                      59 23 31 2 *
I0516 00:03:32.679] Concurrency Policy:            Allow
I0516 00:03:32.679] Suspend:                       False
I0516 00:03:32.679] Successful Job History Limit:  3
I0516 00:03:32.679] Failed Job History Limit:      1
I0516 00:03:32.680] Starting Deadline Seconds:     <unset>
I0516 00:03:32.680] Selector:                      <unset>
I0516 00:03:32.680] Parallelism:                   <unset>
I0516 00:03:32.680] Completions:                   <unset>
I0516 00:03:32.680] Pod Template:
I0516 00:03:32.680]   Labels:  run=pi
... skipping 32 lines ...
I0516 00:03:33.237]                 run=pi
I0516 00:03:33.237] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0516 00:03:33.237] Controlled By:  CronJob/pi
I0516 00:03:33.237] Parallelism:    1
I0516 00:03:33.237] Completions:    1
I0516 00:03:33.238] Start Time:     Thu, 16 May 2019 00:03:32 +0000
I0516 00:03:33.238] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0516 00:03:33.238] Pod Template:
I0516 00:03:33.238]   Labels:  controller-uid=1388fab7-fecf-4678-ac27-e702f1c23423
I0516 00:03:33.238]            job-name=test-job
I0516 00:03:33.238]            run=pi
I0516 00:03:33.238]   Containers:
I0516 00:03:33.238]    pi:
... skipping 389 lines ...
I0516 00:03:43.318]   selector:
I0516 00:03:43.319]     role: padawan
I0516 00:03:43.319]   sessionAffinity: None
I0516 00:03:43.319]   type: ClusterIP
I0516 00:03:43.319] status:
I0516 00:03:43.319]   loadBalancer: {}
W0516 00:03:43.419] error: you must specify resources by --filename when --local is set.
W0516 00:03:43.420] Example resource specifications include:
W0516 00:03:43.420]    '-f rsrc.yaml'
W0516 00:03:43.420]    '--filename=rsrc.json'
I0516 00:03:43.520] core.sh:886: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0516 00:03:43.664] core.sh:893: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0516 00:03:43.746] service "redis-master" deleted
... skipping 108 lines ...
I0516 00:03:51.302] apps.sh:80: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0516 00:03:51.399] apps.sh:81: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0516 00:03:51.504] daemonset.extensions/bind rolled back
I0516 00:03:51.611] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0516 00:03:51.708] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0516 00:03:51.815] Successful
I0516 00:03:51.816] message:error: unable to find specified revision 1000000 in history
I0516 00:03:51.816] has:unable to find specified revision
I0516 00:03:51.915] apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0516 00:03:52.025] apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0516 00:03:52.136] daemonset.extensions/bind rolled back
W0516 00:03:52.241] E0516 00:03:51.528314   51063 daemon_controller.go:302] namespace-1557965029-22133/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1557965029-22133", SelfLink:"/apis/apps/v1/namespaces/namespace-1557965029-22133/daemonsets/bind", UID:"f39d8da3-3d40-4297-adcf-1e1b78329ed1", ResourceVersion:"1631", Generation:3, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63693561829, loc:(*time.Location)(0x72a78a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"3", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1557965029-22133\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000ee1a00), Fields:(*v1.Fields)(0xc002d8de60)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001374340), Fields:(*v1.Fields)(0xc002d8def8)}, v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc0013745c0), Fields:(*v1.Fields)(0xc002d8df48)}, v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001374780), Fields:(*v1.Fields)(0xc002d8df78)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001374a40), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), 
Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:2.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001ea7ab8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002e38c60), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc001374ac0), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc002d8dfd0)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001ea7b30)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:2, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
W0516 00:03:52.245] E0516 00:03:52.164017   51063 daemon_controller.go:302] namespace-1557965029-22133/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1557965029-22133", SelfLink:"/apis/apps/v1/namespaces/namespace-1557965029-22133/daemonsets/bind", UID:"f39d8da3-3d40-4297-adcf-1e1b78329ed1", ResourceVersion:"1634", Generation:4, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63693561829, loc:(*time.Location)(0x72a78a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"4", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1557965029-22133\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00185ed60), Fields:(*v1.Fields)(0xc002e5acb8)}, v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00185f4e0), Fields:(*v1.Fields)(0xc002e5ad50)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00185f5c0), Fields:(*v1.Fields)(0xc002e5ad80)}, v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00185f720), Fields:(*v1.Fields)(0xc002e5add0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00185faa0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), 
Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:latest", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"app", Image:"k8s.gcr.io/nginx:test-cmd", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002da9d78), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002ccc600), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc00185fae0), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc002e5ae30)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002da9df0)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:3, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
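Note on the two daemon_controller.go warnings above: "Operation cannot be fulfilled on daemonsets.apps \"bind\": the object has been modified" is an optimistic-concurrency conflict — the controller wrote status against a stale ResourceVersion while kubectl apply was updating the same DaemonSet, and the controller simply retries, so the test still passes. A minimal sketch of the usual read-modify-write retry pattern with client-go (function and variable names here are illustrative, not taken from this log; signatures match the pre-context client-go of this build's vintage):

```go
package example

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// updateDaemonSetImage re-reads the latest object on every attempt, so a
// Conflict error like the one logged above just triggers another try
// against the newest ResourceVersion.
func updateDaemonSetImage(cs kubernetes.Interface, ns, name, image string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ds, err := cs.AppsV1().DaemonSets(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		ds.Spec.Template.Spec.Containers[0].Image = image
		_, err = cs.AppsV1().DaemonSets(ns).Update(ds)
		return err // a Conflict here makes RetryOnConflict loop again
	})
}
```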
I0516 00:03:52.345] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0516 00:03:52.353] apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0516 00:03:52.454] apps.sh:95: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0516 00:03:52.533] daemonset.apps "bind" deleted
I0516 00:03:52.560] +++ exit code: 0
I0516 00:03:52.606] Recording: run_rc_tests
... skipping 24 lines ...
I0516 00:03:53.883] Namespace:    namespace-1557965032-11059
I0516 00:03:53.883] Selector:     app=guestbook,tier=frontend
I0516 00:03:53.883] Labels:       app=guestbook
I0516 00:03:53.883]               tier=frontend
I0516 00:03:53.883] Annotations:  <none>
I0516 00:03:53.884] Replicas:     3 current / 3 desired
I0516 00:03:53.884] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 00:03:53.884] Pod Template:
I0516 00:03:53.884]   Labels:  app=guestbook
I0516 00:03:53.884]            tier=frontend
I0516 00:03:53.885]   Containers:
I0516 00:03:53.885]    php-redis:
I0516 00:03:53.885]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0516 00:03:54.007] Namespace:    namespace-1557965032-11059
I0516 00:03:54.007] Selector:     app=guestbook,tier=frontend
I0516 00:03:54.007] Labels:       app=guestbook
I0516 00:03:54.007]               tier=frontend
I0516 00:03:54.007] Annotations:  <none>
I0516 00:03:54.007] Replicas:     3 current / 3 desired
I0516 00:03:54.007] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 00:03:54.008] Pod Template:
I0516 00:03:54.008]   Labels:  app=guestbook
I0516 00:03:54.008]            tier=frontend
I0516 00:03:54.008]   Containers:
I0516 00:03:54.008]    php-redis:
I0516 00:03:54.008]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0516 00:03:54.123] Namespace:    namespace-1557965032-11059
I0516 00:03:54.123] Selector:     app=guestbook,tier=frontend
I0516 00:03:54.123] Labels:       app=guestbook
I0516 00:03:54.123]               tier=frontend
I0516 00:03:54.123] Annotations:  <none>
I0516 00:03:54.123] Replicas:     3 current / 3 desired
I0516 00:03:54.124] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 00:03:54.124] Pod Template:
I0516 00:03:54.124]   Labels:  app=guestbook
I0516 00:03:54.124]            tier=frontend
I0516 00:03:54.124]   Containers:
I0516 00:03:54.124]    php-redis:
I0516 00:03:54.124]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I0516 00:03:54.248] Namespace:    namespace-1557965032-11059
I0516 00:03:54.249] Selector:     app=guestbook,tier=frontend
I0516 00:03:54.249] Labels:       app=guestbook
I0516 00:03:54.249]               tier=frontend
I0516 00:03:54.249] Annotations:  <none>
I0516 00:03:54.249] Replicas:     3 current / 3 desired
I0516 00:03:54.249] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 00:03:54.249] Pod Template:
I0516 00:03:54.249]   Labels:  app=guestbook
I0516 00:03:54.249]            tier=frontend
I0516 00:03:54.250]   Containers:
I0516 00:03:54.250]    php-redis:
I0516 00:03:54.250]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0516 00:03:54.408] Namespace:    namespace-1557965032-11059
I0516 00:03:54.408] Selector:     app=guestbook,tier=frontend
I0516 00:03:54.408] Labels:       app=guestbook
I0516 00:03:54.408]               tier=frontend
I0516 00:03:54.409] Annotations:  <none>
I0516 00:03:54.409] Replicas:     3 current / 3 desired
I0516 00:03:54.409] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 00:03:54.409] Pod Template:
I0516 00:03:54.409]   Labels:  app=guestbook
I0516 00:03:54.410]            tier=frontend
I0516 00:03:54.410]   Containers:
I0516 00:03:54.410]    php-redis:
I0516 00:03:54.410]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0516 00:03:54.524] Namespace:    namespace-1557965032-11059
I0516 00:03:54.524] Selector:     app=guestbook,tier=frontend
I0516 00:03:54.524] Labels:       app=guestbook
I0516 00:03:54.524]               tier=frontend
I0516 00:03:54.524] Annotations:  <none>
I0516 00:03:54.525] Replicas:     3 current / 3 desired
I0516 00:03:54.525] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 00:03:54.525] Pod Template:
I0516 00:03:54.525]   Labels:  app=guestbook
I0516 00:03:54.525]            tier=frontend
I0516 00:03:54.525]   Containers:
I0516 00:03:54.525]    php-redis:
I0516 00:03:54.526]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0516 00:03:54.638] Namespace:    namespace-1557965032-11059
I0516 00:03:54.639] Selector:     app=guestbook,tier=frontend
I0516 00:03:54.639] Labels:       app=guestbook
I0516 00:03:54.639]               tier=frontend
I0516 00:03:54.639] Annotations:  <none>
I0516 00:03:54.639] Replicas:     3 current / 3 desired
I0516 00:03:54.640] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 00:03:54.640] Pod Template:
I0516 00:03:54.640]   Labels:  app=guestbook
I0516 00:03:54.640]            tier=frontend
I0516 00:03:54.640]   Containers:
I0516 00:03:54.640]    php-redis:
I0516 00:03:54.640]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0516 00:03:54.763] Namespace:    namespace-1557965032-11059
I0516 00:03:54.764] Selector:     app=guestbook,tier=frontend
I0516 00:03:54.764] Labels:       app=guestbook
I0516 00:03:54.764]               tier=frontend
I0516 00:03:54.764] Annotations:  <none>
I0516 00:03:54.764] Replicas:     3 current / 3 desired
I0516 00:03:54.764] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 00:03:54.764] Pod Template:
I0516 00:03:54.765]   Labels:  app=guestbook
I0516 00:03:54.765]            tier=frontend
I0516 00:03:54.765]   Containers:
I0516 00:03:54.765]    php-redis:
I0516 00:03:54.765]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 22 lines ...
I0516 00:03:55.535] replicationcontroller/frontend scaled
I0516 00:03:55.636] core.sh:1087: Successful get rc frontend {{.spec.replicas}}: 3
I0516 00:03:55.734] core.sh:1091: Successful get rc frontend {{.spec.replicas}}: 3
I0516 00:03:55.823] replicationcontroller/frontend scaled
I0516 00:03:55.929] core.sh:1095: Successful get rc frontend {{.spec.replicas}}: 2
I0516 00:03:56.018] replicationcontroller "frontend" deleted
W0516 00:03:56.119] error: Expected replicas to be 3, was 2
W0516 00:03:56.119] I0516 00:03:55.539884   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557965032-11059", Name:"frontend", UID:"3cfb776f-8498-432a-b91c-7b3499f7669c", APIVersion:"v1", ResourceVersion:"1675", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-cdhbl
W0516 00:03:56.120] I0516 00:03:55.827042   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557965032-11059", Name:"frontend", UID:"3cfb776f-8498-432a-b91c-7b3499f7669c", APIVersion:"v1", ResourceVersion:"1680", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-cdhbl
W0516 00:03:56.220] I0516 00:03:56.219748   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557965032-11059", Name:"redis-master", UID:"fbaf2ff7-fd26-484e-af2c-a0d31f204af4", APIVersion:"v1", ResourceVersion:"1691", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-mvvhz
I0516 00:03:56.321] replicationcontroller/redis-master created
I0516 00:03:56.423] replicationcontroller/redis-slave created
W0516 00:03:56.524] I0516 00:03:56.428322   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557965032-11059", Name:"redis-slave", UID:"882911fa-80a3-4332-9789-d60be9426121", APIVersion:"v1", ResourceVersion:"1696", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-hj6tm
... skipping 36 lines ...
I0516 00:03:58.278] service "expose-test-deployment" deleted
I0516 00:03:58.389] Successful
I0516 00:03:58.390] message:service/expose-test-deployment exposed
I0516 00:03:58.390] has:service/expose-test-deployment exposed
I0516 00:03:58.474] service "expose-test-deployment" deleted
I0516 00:03:58.570] Successful
I0516 00:03:58.570] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0516 00:03:58.570] See 'kubectl expose -h' for help and examples
I0516 00:03:58.570] has:invalid deployment: no selectors
I0516 00:03:58.662] Successful
I0516 00:03:58.663] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0516 00:03:58.663] See 'kubectl expose -h' for help and examples
I0516 00:03:58.663] has:invalid deployment: no selectors
I0516 00:03:58.845] deployment.apps/nginx-deployment created
W0516 00:03:58.946] I0516 00:03:58.852021   51063 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557965032-11059", Name:"nginx-deployment", UID:"29edc714-602d-4fd1-bdb4-10e1b94eed34", APIVersion:"apps/v1", ResourceVersion:"1796", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5cb597d4f to 3
W0516 00:03:58.947] I0516 00:03:58.856322   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557965032-11059", Name:"nginx-deployment-5cb597d4f", UID:"78b877d1-8369-4278-b625-5c34ef4002a2", APIVersion:"apps/v1", ResourceVersion:"1797", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5cb597d4f-z2x4r
W0516 00:03:58.948] I0516 00:03:58.859723   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557965032-11059", Name:"nginx-deployment-5cb597d4f", UID:"78b877d1-8369-4278-b625-5c34ef4002a2", APIVersion:"apps/v1", ResourceVersion:"1797", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5cb597d4f-xl9hw
... skipping 23 lines ...
I0516 00:04:00.984] service "frontend" deleted
I0516 00:04:00.991] service "frontend-2" deleted
I0516 00:04:00.997] service "frontend-3" deleted
I0516 00:04:01.003] service "frontend-4" deleted
I0516 00:04:01.010] service "frontend-5" deleted
I0516 00:04:01.113] Successful
I0516 00:04:01.113] message:error: cannot expose a Node
I0516 00:04:01.114] has:cannot expose
I0516 00:04:01.208] Successful
I0516 00:04:01.208] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0516 00:04:01.208] has:metadata.name: Invalid value
I0516 00:04:01.307] Successful
I0516 00:04:01.308] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 30 lines ...
I0516 00:04:03.494] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0516 00:04:03.592] core.sh:1259: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I0516 00:04:03.673] horizontalpodautoscaler.autoscaling "frontend" deleted
I0516 00:04:03.762] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0516 00:04:03.863] core.sh:1263: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0516 00:04:03.945] horizontalpodautoscaler.autoscaling "frontend" deleted
W0516 00:04:04.045] Error: required flag(s) "max" not set
W0516 00:04:04.046] 
W0516 00:04:04.046] 
W0516 00:04:04.046] Examples:
W0516 00:04:04.046]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0516 00:04:04.046]   kubectl autoscale deployment foo --min=2 --max=10
W0516 00:04:04.046]   
... skipping 55 lines ...
I0516 00:04:04.326]           limits:
I0516 00:04:04.326]             cpu: 300m
I0516 00:04:04.326]           requests:
I0516 00:04:04.326]             cpu: 300m
I0516 00:04:04.326]       terminationGracePeriodSeconds: 0
I0516 00:04:04.326] status: {}
W0516 00:04:04.427] Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
I0516 00:04:04.608] deployment.apps/nginx-deployment-resources created
W0516 00:04:04.709] I0516 00:04:04.614854   51063 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557965032-11059", Name:"nginx-deployment-resources", UID:"cbf24466-55fc-4691-9d71-43077e7f4b6a", APIVersion:"apps/v1", ResourceVersion:"1937", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-865b6bb7c6 to 3
W0516 00:04:04.710] I0516 00:04:04.620111   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557965032-11059", Name:"nginx-deployment-resources-865b6bb7c6", UID:"296177c1-f2bc-49bf-ba00-4e536b00e1a1", APIVersion:"apps/v1", ResourceVersion:"1938", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-865b6bb7c6-v4smj
W0516 00:04:04.710] I0516 00:04:04.624200   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557965032-11059", Name:"nginx-deployment-resources-865b6bb7c6", UID:"296177c1-f2bc-49bf-ba00-4e536b00e1a1", APIVersion:"apps/v1", ResourceVersion:"1938", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-865b6bb7c6-98cxk
W0516 00:04:04.711] I0516 00:04:04.625201   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557965032-11059", Name:"nginx-deployment-resources-865b6bb7c6", UID:"296177c1-f2bc-49bf-ba00-4e536b00e1a1", APIVersion:"apps/v1", ResourceVersion:"1938", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-865b6bb7c6-7d8kp
I0516 00:04:04.811] core.sh:1278: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
... skipping 2 lines ...
I0516 00:04:05.045] deployment.extensions/nginx-deployment-resources resource requirements updated
W0516 00:04:05.146] I0516 00:04:05.051328   51063 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557965032-11059", Name:"nginx-deployment-resources", UID:"cbf24466-55fc-4691-9d71-43077e7f4b6a", APIVersion:"apps/v1", ResourceVersion:"1951", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-69b4c96c9b to 1
W0516 00:04:05.146] I0516 00:04:05.056415   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557965032-11059", Name:"nginx-deployment-resources-69b4c96c9b", UID:"378af642-7a54-4ec9-b150-c7ca371ea73e", APIVersion:"apps/v1", ResourceVersion:"1952", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-69b4c96c9b-2m5k5
I0516 00:04:05.247] core.sh:1283: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
I0516 00:04:05.257] core.sh:1284: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
I0516 00:04:05.455] deployment.extensions/nginx-deployment-resources resource requirements updated
W0516 00:04:05.555] error: unable to find container named redis
W0516 00:04:05.556] I0516 00:04:05.475814   51063 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557965032-11059", Name:"nginx-deployment-resources", UID:"cbf24466-55fc-4691-9d71-43077e7f4b6a", APIVersion:"apps/v1", ResourceVersion:"1962", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-865b6bb7c6 to 2
W0516 00:04:05.556] I0516 00:04:05.481308   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557965032-11059", Name:"nginx-deployment-resources-865b6bb7c6", UID:"296177c1-f2bc-49bf-ba00-4e536b00e1a1", APIVersion:"apps/v1", ResourceVersion:"1966", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-865b6bb7c6-v4smj
W0516 00:04:05.557] I0516 00:04:05.498964   51063 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557965032-11059", Name:"nginx-deployment-resources", UID:"cbf24466-55fc-4691-9d71-43077e7f4b6a", APIVersion:"apps/v1", ResourceVersion:"1965", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-7bb7d84c58 to 1
W0516 00:04:05.557] I0516 00:04:05.504161   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557965032-11059", Name:"nginx-deployment-resources-7bb7d84c58", UID:"52dfc28f-1923-4285-a1cb-e7250eb26a86", APIVersion:"apps/v1", ResourceVersion:"1973", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-7bb7d84c58-dqkrl
I0516 00:04:05.657] core.sh:1289: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0516 00:04:05.666] core.sh:1290: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
... skipping 211 lines ...
I0516 00:04:06.205]     status: "True"
I0516 00:04:06.205]     type: Progressing
I0516 00:04:06.205]   observedGeneration: 4
I0516 00:04:06.205]   replicas: 4
I0516 00:04:06.205]   unavailableReplicas: 4
I0516 00:04:06.205]   updatedReplicas: 1
W0516 00:04:06.306] error: you must specify resources by --filename when --local is set.
W0516 00:04:06.306] Example resource specifications include:
W0516 00:04:06.306]    '-f rsrc.yaml'
W0516 00:04:06.307]    '--filename=rsrc.json'
I0516 00:04:06.407] core.sh:1299: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0516 00:04:06.448] core.sh:1300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I0516 00:04:06.543] core.sh:1301: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 44 lines ...
I0516 00:04:08.108]                 pod-template-hash=75c7695cbd
I0516 00:04:08.109] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I0516 00:04:08.109]                 deployment.kubernetes.io/max-replicas: 2
I0516 00:04:08.109]                 deployment.kubernetes.io/revision: 1
I0516 00:04:08.109] Controlled By:  Deployment/test-nginx-apps
I0516 00:04:08.109] Replicas:       1 current / 1 desired
I0516 00:04:08.110] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0516 00:04:08.110] Pod Template:
I0516 00:04:08.110]   Labels:  app=test-nginx-apps
I0516 00:04:08.110]            pod-template-hash=75c7695cbd
I0516 00:04:08.110]   Containers:
I0516 00:04:08.110]    nginx:
I0516 00:04:08.111]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 90 lines ...
I0516 00:04:13.365]     Image:	k8s.gcr.io/nginx:test-cmd
I0516 00:04:13.459] apps.sh:296: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0516 00:04:13.573] deployment.extensions/nginx rolled back
I0516 00:04:14.679] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0516 00:04:14.887] apps.sh:303: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0516 00:04:14.999] deployment.extensions/nginx rolled back
W0516 00:04:15.100] error: unable to find specified revision 1000000 in history
I0516 00:04:16.107] apps.sh:307: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0516 00:04:16.202] deployment.extensions/nginx paused
W0516 00:04:16.311] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
W0516 00:04:16.406] error: deployments.extensions "nginx" can't restart paused deployment (run rollout resume first)
I0516 00:04:16.509] deployment.extensions/nginx resumed
I0516 00:04:16.638] deployment.extensions/nginx rolled back
I0516 00:04:16.854]     deployment.kubernetes.io/revision-history: 1,3
W0516 00:04:17.042] error: desired revision (3) is different from the running revision (5)
I0516 00:04:17.145] deployment.extensions/nginx restarted
W0516 00:04:17.246] I0516 00:04:17.167944   51063 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557965046-19164", Name:"nginx", UID:"8f466da9-a633-4758-aa54-ce8e8307c817", APIVersion:"apps/v1", ResourceVersion:"2183", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-958dc566b to 2
W0516 00:04:17.246] I0516 00:04:17.173364   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557965046-19164", Name:"nginx-958dc566b", UID:"0b92d1b5-1e60-4240-aafc-c98fe0d4334a", APIVersion:"apps/v1", ResourceVersion:"2187", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-958dc566b-7k2wh
W0516 00:04:17.247] I0516 00:04:17.189286   51063 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557965046-19164", Name:"nginx", UID:"8f466da9-a633-4758-aa54-ce8e8307c817", APIVersion:"apps/v1", ResourceVersion:"2186", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-758f9b454c to 1
W0516 00:04:17.247] I0516 00:04:17.193050   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557965046-19164", Name:"nginx-758f9b454c", UID:"2272991e-48bd-40b6-9516-25b9bb0ea622", APIVersion:"apps/v1", ResourceVersion:"2194", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-758f9b454c-dgkqn
I0516 00:04:18.359] Successful
... skipping 151 lines ...
I0516 00:04:20.202] deployment.apps/nginx-deployment image updated
I0516 00:04:20.308] apps.sh:355: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0516 00:04:20.401] apps.sh:356: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0516 00:04:20.576] apps.sh:359: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0516 00:04:20.670] apps.sh:360: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0516 00:04:20.768] deployment.extensions/nginx-deployment image updated
W0516 00:04:20.869] error: unable to find container named "redis"
W0516 00:04:20.869] I0516 00:04:20.789196   51063 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557965046-19164", Name:"nginx-deployment", UID:"1eb5828d-4e87-44f7-aacf-75f6b68bfba5", APIVersion:"apps/v1", ResourceVersion:"2271", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-64f55cb875 to 0
W0516 00:04:20.870] I0516 00:04:20.793966   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557965046-19164", Name:"nginx-deployment-64f55cb875", UID:"f5de5a0a-bdd0-4b36-ad22-dd5875831a35", APIVersion:"apps/v1", ResourceVersion:"2275", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-64f55cb875-w86l2
W0516 00:04:20.870] I0516 00:04:20.807100   51063 event.go:258] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1557965046-19164", Name:"nginx-deployment", UID:"1eb5828d-4e87-44f7-aacf-75f6b68bfba5", APIVersion:"apps/v1", ResourceVersion:"2274", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-78c586c467 to 1
W0516 00:04:20.870] I0516 00:04:20.813624   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557965046-19164", Name:"nginx-deployment-78c586c467", UID:"9cc4becd-7a8b-436a-a9dc-e14f146de913", APIVersion:"apps/v1", ResourceVersion:"2281", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-78c586c467-k5j9b
I0516 00:04:20.971] apps.sh:363: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0516 00:04:20.980] apps.sh:364: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 37 lines ...
W0516 00:04:23.418] I0516 00:04:23.417887   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557965046-19164", Name:"nginx-deployment-5dfd5c49d4", UID:"26524df2-1aa0-4f72-9d0e-96b079171219", APIVersion:"apps/v1", ResourceVersion:"2401", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5dfd5c49d4-qztkq
I0516 00:04:23.519] deployment.extensions/nginx-deployment env updated
I0516 00:04:23.519] deployment.extensions/nginx-deployment env updated
I0516 00:04:23.520] deployment.extensions/nginx-deployment env updated
I0516 00:04:23.558] deployment.extensions/nginx-deployment env updated
I0516 00:04:23.658] deployment.extensions "nginx-deployment" deleted
W0516 00:04:23.758] E0516 00:04:23.712404   51063 replica_set.go:450] Sync "namespace-1557965046-19164/nginx-deployment-5dfd5c49d4" failed with replicasets.apps "nginx-deployment-5dfd5c49d4" not found
I0516 00:04:23.859] configmap "test-set-env-config" deleted
I0516 00:04:23.859] secret "test-set-env-secret" deleted
I0516 00:04:23.877] +++ exit code: 0
I0516 00:04:23.962] Recording: run_rs_tests
I0516 00:04:23.962] Running command: run_rs_tests
I0516 00:04:23.987] 
... skipping 17 lines ...
W0516 00:04:25.051] I0516 00:04:24.955153   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557965064-17412", Name:"frontend-no-cascade", UID:"bdf10188-32db-4b34-9504-8cb10953c939", APIVersion:"apps/v1", ResourceVersion:"2451", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-dc5tr
W0516 00:04:25.051] I0516 00:04:24.960536   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557965064-17412", Name:"frontend-no-cascade", UID:"bdf10188-32db-4b34-9504-8cb10953c939", APIVersion:"apps/v1", ResourceVersion:"2451", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-kjm9k
W0516 00:04:25.052] I0516 00:04:24.960955   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1557965064-17412", Name:"frontend-no-cascade", UID:"bdf10188-32db-4b34-9504-8cb10953c939", APIVersion:"apps/v1", ResourceVersion:"2451", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-qpjnz
I0516 00:04:25.152] apps.sh:526: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
I0516 00:04:25.153] +++ [0516 00:04:25] Deleting rs
I0516 00:04:25.154] replicaset.extensions "frontend-no-cascade" deleted
W0516 00:04:25.255] E0516 00:04:25.178597   51063 replica_set.go:450] Sync "namespace-1557965064-17412/frontend-no-cascade" failed with Operation cannot be fulfilled on replicasets.apps "frontend-no-cascade": StorageError: invalid object, Code: 4, Key: /registry/replicasets/namespace-1557965064-17412/frontend-no-cascade, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: bdf10188-32db-4b34-9504-8cb10953c939, UID in object meta: 
I0516 00:04:25.356] apps.sh:530: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:04:25.367] apps.sh:532: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
I0516 00:04:25.451] pod "frontend-no-cascade-dc5tr" deleted
I0516 00:04:25.457] pod "frontend-no-cascade-kjm9k" deleted
I0516 00:04:25.462] pod "frontend-no-cascade-qpjnz" deleted
I0516 00:04:25.562] apps.sh:535: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 8 lines ...
I0516 00:04:26.102] Namespace:    namespace-1557965064-17412
I0516 00:04:26.102] Selector:     app=guestbook,tier=frontend
I0516 00:04:26.102] Labels:       app=guestbook
I0516 00:04:26.102]               tier=frontend
I0516 00:04:26.102] Annotations:  <none>
I0516 00:04:26.103] Replicas:     3 current / 3 desired
I0516 00:04:26.103] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 00:04:26.103] Pod Template:
I0516 00:04:26.103]   Labels:  app=guestbook
I0516 00:04:26.103]            tier=frontend
I0516 00:04:26.104]   Containers:
I0516 00:04:26.104]    php-redis:
I0516 00:04:26.104]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0516 00:04:26.211] Namespace:    namespace-1557965064-17412
I0516 00:04:26.211] Selector:     app=guestbook,tier=frontend
I0516 00:04:26.211] Labels:       app=guestbook
I0516 00:04:26.211]               tier=frontend
I0516 00:04:26.211] Annotations:  <none>
I0516 00:04:26.212] Replicas:     3 current / 3 desired
I0516 00:04:26.212] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 00:04:26.212] Pod Template:
I0516 00:04:26.212]   Labels:  app=guestbook
I0516 00:04:26.212]            tier=frontend
I0516 00:04:26.212]   Containers:
I0516 00:04:26.213]    php-redis:
I0516 00:04:26.213]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0516 00:04:26.319] Namespace:    namespace-1557965064-17412
I0516 00:04:26.320] Selector:     app=guestbook,tier=frontend
I0516 00:04:26.320] Labels:       app=guestbook
I0516 00:04:26.320]               tier=frontend
I0516 00:04:26.320] Annotations:  <none>
I0516 00:04:26.320] Replicas:     3 current / 3 desired
I0516 00:04:26.320] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 00:04:26.321] Pod Template:
I0516 00:04:26.321]   Labels:  app=guestbook
I0516 00:04:26.321]            tier=frontend
I0516 00:04:26.321]   Containers:
I0516 00:04:26.321]    php-redis:
I0516 00:04:26.322]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
I0516 00:04:26.429] Namespace:    namespace-1557965064-17412
I0516 00:04:26.429] Selector:     app=guestbook,tier=frontend
I0516 00:04:26.429] Labels:       app=guestbook
I0516 00:04:26.429]               tier=frontend
I0516 00:04:26.429] Annotations:  <none>
I0516 00:04:26.430] Replicas:     3 current / 3 desired
I0516 00:04:26.430] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 00:04:26.430] Pod Template:
I0516 00:04:26.430]   Labels:  app=guestbook
I0516 00:04:26.430]            tier=frontend
I0516 00:04:26.430]   Containers:
I0516 00:04:26.430]    php-redis:
I0516 00:04:26.431]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0516 00:04:26.563] Namespace:    namespace-1557965064-17412
I0516 00:04:26.563] Selector:     app=guestbook,tier=frontend
I0516 00:04:26.563] Labels:       app=guestbook
I0516 00:04:26.563]               tier=frontend
I0516 00:04:26.564] Annotations:  <none>
I0516 00:04:26.564] Replicas:     3 current / 3 desired
I0516 00:04:26.564] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 00:04:26.564] Pod Template:
I0516 00:04:26.564]   Labels:  app=guestbook
I0516 00:04:26.564]            tier=frontend
I0516 00:04:26.565]   Containers:
I0516 00:04:26.565]    php-redis:
I0516 00:04:26.565]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0516 00:04:26.671] Namespace:    namespace-1557965064-17412
I0516 00:04:26.671] Selector:     app=guestbook,tier=frontend
I0516 00:04:26.672] Labels:       app=guestbook
I0516 00:04:26.672]               tier=frontend
I0516 00:04:26.672] Annotations:  <none>
I0516 00:04:26.672] Replicas:     3 current / 3 desired
I0516 00:04:26.672] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 00:04:26.672] Pod Template:
I0516 00:04:26.673]   Labels:  app=guestbook
I0516 00:04:26.673]            tier=frontend
I0516 00:04:26.673]   Containers:
I0516 00:04:26.673]    php-redis:
I0516 00:04:26.673]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0516 00:04:26.779] Namespace:    namespace-1557965064-17412
I0516 00:04:26.779] Selector:     app=guestbook,tier=frontend
I0516 00:04:26.779] Labels:       app=guestbook
I0516 00:04:26.779]               tier=frontend
I0516 00:04:26.780] Annotations:  <none>
I0516 00:04:26.780] Replicas:     3 current / 3 desired
I0516 00:04:26.780] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 00:04:26.780] Pod Template:
I0516 00:04:26.780]   Labels:  app=guestbook
I0516 00:04:26.780]            tier=frontend
I0516 00:04:26.781]   Containers:
I0516 00:04:26.781]    php-redis:
I0516 00:04:26.781]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I0516 00:04:26.886] Namespace:    namespace-1557965064-17412
I0516 00:04:26.887] Selector:     app=guestbook,tier=frontend
I0516 00:04:26.887] Labels:       app=guestbook
I0516 00:04:26.887]               tier=frontend
I0516 00:04:26.887] Annotations:  <none>
I0516 00:04:26.887] Replicas:     3 current / 3 desired
I0516 00:04:26.887] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0516 00:04:26.887] Pod Template:
I0516 00:04:26.887]   Labels:  app=guestbook
I0516 00:04:26.888]            tier=frontend
I0516 00:04:26.888]   Containers:
I0516 00:04:26.888]    php-redis:
I0516 00:04:26.888]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 181 lines ...
I0516 00:04:32.375] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0516 00:04:32.449] apps.sh:651: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I0516 00:04:32.527] horizontalpodautoscaler.autoscaling "frontend" deleted
I0516 00:04:32.612] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0516 00:04:32.703] apps.sh:655: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0516 00:04:32.782] horizontalpodautoscaler.autoscaling "frontend" deleted
W0516 00:04:32.882] Error: required flag(s) "max" not set
W0516 00:04:32.882] 
W0516 00:04:32.882] 
W0516 00:04:32.883] Examples:
W0516 00:04:32.883]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0516 00:04:32.883]   kubectl autoscale deployment foo --min=2 --max=10
W0516 00:04:32.883]   
... skipping 89 lines ...
I0516 00:04:36.089] apps.sh:439: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0516 00:04:36.185] apps.sh:440: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0516 00:04:36.290] statefulset.apps/nginx rolled back
I0516 00:04:36.394] apps.sh:443: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0516 00:04:36.488] apps.sh:444: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0516 00:04:36.596] Successful
I0516 00:04:36.597] message:error: unable to find specified revision 1000000 in history
I0516 00:04:36.597] has:unable to find specified revision
I0516 00:04:36.699] apps.sh:448: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0516 00:04:36.800] apps.sh:449: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0516 00:04:36.916] statefulset.apps/nginx rolled back
I0516 00:04:37.027] apps.sh:452: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
I0516 00:04:37.123] apps.sh:453: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
... skipping 58 lines ...
I0516 00:04:39.138] Name:         mock
I0516 00:04:39.138] Namespace:    namespace-1557965078-2696
I0516 00:04:39.138] Selector:     app=mock
I0516 00:04:39.138] Labels:       app=mock
I0516 00:04:39.138] Annotations:  <none>
I0516 00:04:39.138] Replicas:     1 current / 1 desired
I0516 00:04:39.138] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0516 00:04:39.138] Pod Template:
I0516 00:04:39.139]   Labels:  app=mock
I0516 00:04:39.139]   Containers:
I0516 00:04:39.139]    mock-container:
I0516 00:04:39.139]     Image:        k8s.gcr.io/pause:2.0
I0516 00:04:39.139]     Port:         9949/TCP
... skipping 56 lines ...
I0516 00:04:41.628] Name:         mock
I0516 00:04:41.628] Namespace:    namespace-1557965078-2696
I0516 00:04:41.628] Selector:     app=mock
I0516 00:04:41.629] Labels:       app=mock
I0516 00:04:41.629] Annotations:  <none>
I0516 00:04:41.629] Replicas:     1 current / 1 desired
I0516 00:04:41.629] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0516 00:04:41.629] Pod Template:
I0516 00:04:41.630]   Labels:  app=mock
I0516 00:04:41.630]   Containers:
I0516 00:04:41.630]    mock-container:
I0516 00:04:41.630]     Image:        k8s.gcr.io/pause:2.0
I0516 00:04:41.630]     Port:         9949/TCP
... skipping 56 lines ...
I0516 00:04:44.060] Name:         mock
I0516 00:04:44.060] Namespace:    namespace-1557965078-2696
I0516 00:04:44.060] Selector:     app=mock
I0516 00:04:44.060] Labels:       app=mock
I0516 00:04:44.060] Annotations:  <none>
I0516 00:04:44.060] Replicas:     1 current / 1 desired
I0516 00:04:44.060] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0516 00:04:44.060] Pod Template:
I0516 00:04:44.060]   Labels:  app=mock
I0516 00:04:44.061]   Containers:
I0516 00:04:44.061]    mock-container:
I0516 00:04:44.061]     Image:        k8s.gcr.io/pause:2.0
I0516 00:04:44.061]     Port:         9949/TCP
... skipping 41 lines ...
I0516 00:04:46.364] Namespace:    namespace-1557965078-2696
I0516 00:04:46.364] Selector:     app=mock
I0516 00:04:46.364] Labels:       app=mock
I0516 00:04:46.365]               status=replaced
I0516 00:04:46.365] Annotations:  <none>
I0516 00:04:46.365] Replicas:     1 current / 1 desired
I0516 00:04:46.365] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0516 00:04:46.365] Pod Template:
I0516 00:04:46.365]   Labels:  app=mock
I0516 00:04:46.365]   Containers:
I0516 00:04:46.365]    mock-container:
I0516 00:04:46.365]     Image:        k8s.gcr.io/pause:2.0
I0516 00:04:46.366]     Port:         9949/TCP
... skipping 11 lines ...
I0516 00:04:46.372] Namespace:    namespace-1557965078-2696
I0516 00:04:46.373] Selector:     app=mock2
I0516 00:04:46.373] Labels:       app=mock2
I0516 00:04:46.373]               status=replaced
I0516 00:04:46.373] Annotations:  <none>
I0516 00:04:46.373] Replicas:     1 current / 1 desired
I0516 00:04:46.373] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0516 00:04:46.373] Pod Template:
I0516 00:04:46.373]   Labels:  app=mock2
I0516 00:04:46.373]   Containers:
I0516 00:04:46.374]    mock-container:
I0516 00:04:46.374]     Image:        k8s.gcr.io/pause:2.0
I0516 00:04:46.374]     Port:         9949/TCP
... skipping 110 lines ...
I0516 00:04:52.302] storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
I0516 00:04:52.383] persistentvolume "pv0001" deleted
I0516 00:04:52.594] persistentvolume/pv0002 created
I0516 00:04:52.707] storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
I0516 00:04:52.785] persistentvolume "pv0002" deleted
I0516 00:04:52.990] persistentvolume/pv0003 created
W0516 00:04:53.090] E0516 00:04:52.993669   51063 pv_protection_controller.go:117] PV pv0003 failed with : Operation cannot be fulfilled on persistentvolumes "pv0003": the object has been modified; please apply your changes to the latest version and try again
I0516 00:04:53.191] storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
I0516 00:04:53.191] persistentvolume "pv0003" deleted
I0516 00:04:53.296] storage.sh:42: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I0516 00:04:53.494] persistentvolume/pv0001 created
I0516 00:04:53.612] storage.sh:45: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
I0516 00:04:53.710] Successful
... skipping 495 lines ...
I0516 00:04:58.807] yes
I0516 00:04:58.808] has:the server doesn't have a resource type
I0516 00:04:58.892] Successful
I0516 00:04:58.927] message:yes
I0516 00:04:58.927] has:yes
I0516 00:04:58.973] Successful
I0516 00:04:58.978] message:error: --subresource can not be used with NonResourceURL
I0516 00:04:58.978] has:subresource can not be used with NonResourceURL
I0516 00:04:59.063] Successful
I0516 00:04:59.151] Successful
I0516 00:04:59.152] message:yes
I0516 00:04:59.152] 0
I0516 00:04:59.152] has:0
... skipping 27 lines ...
I0516 00:04:59.812] role.rbac.authorization.k8s.io/testing-R reconciled
I0516 00:04:59.919] legacy-script.sh:801: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
I0516 00:05:00.015] legacy-script.sh:802: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
I0516 00:05:00.116] legacy-script.sh:803: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
I0516 00:05:00.229] legacy-script.sh:804: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
I0516 00:05:00.319] Successful
I0516 00:05:00.320] message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
I0516 00:05:00.320] has:only rbac.authorization.k8s.io/v1 is supported
I0516 00:05:00.410] rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
I0516 00:05:00.415] role.rbac.authorization.k8s.io "testing-R" deleted
I0516 00:05:00.424] clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
I0516 00:05:00.433] clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
I0516 00:05:00.446] Recording: run_retrieve_multiple_tests
... skipping 45 lines ...
I0516 00:05:01.762] +++ Running case: test-cmd.run_kubectl_explain_tests 
I0516 00:05:01.765] +++ working dir: /go/src/k8s.io/kubernetes
I0516 00:05:01.768] +++ command: run_kubectl_explain_tests
I0516 00:05:01.778] +++ [0516 00:05:01] Testing kubectl(v1:explain)
W0516 00:05:01.879] I0516 00:05:01.618861   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557965100-21717", Name:"cassandra", UID:"b7108bb8-be5a-4f2a-8277-3f0e44cfb9e8", APIVersion:"v1", ResourceVersion:"3020", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-4wfkr
W0516 00:05:01.879] I0516 00:05:01.637057   51063 event.go:258] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1557965100-21717", Name:"cassandra", UID:"b7108bb8-be5a-4f2a-8277-3f0e44cfb9e8", APIVersion:"v1", ResourceVersion:"3020", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-gx7d4
W0516 00:05:01.880] E0516 00:05:01.642121   51063 replica_set.go:450] Sync "namespace-1557965100-21717/cassandra" failed with replicationcontrollers "cassandra" not found
I0516 00:05:01.987] KIND:     Pod
I0516 00:05:01.988] VERSION:  v1
I0516 00:05:01.988] 
I0516 00:05:01.988] DESCRIPTION:
I0516 00:05:01.988]      Pod is a collection of containers that can run on a host. This resource is
I0516 00:05:01.988]      created by clients and scheduled onto hosts.
... skipping 977 lines ...
I0516 00:05:30.100] message:node/127.0.0.1 already uncordoned (dry run)
I0516 00:05:30.100] has:already uncordoned
I0516 00:05:30.195] node-management.sh:119: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
I0516 00:05:30.286] node/127.0.0.1 labeled
I0516 00:05:30.392] node-management.sh:124: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
I0516 00:05:30.465] Successful
I0516 00:05:30.465] message:error: cannot specify both a node name and a --selector option
I0516 00:05:30.465] See 'kubectl drain -h' for help and examples
I0516 00:05:30.465] has:cannot specify both a node name
I0516 00:05:30.539] Successful
I0516 00:05:30.539] message:error: USAGE: cordon NODE [flags]
I0516 00:05:30.539] See 'kubectl cordon -h' for help and examples
I0516 00:05:30.540] has:error\: USAGE\: cordon NODE
I0516 00:05:30.618] node/127.0.0.1 already uncordoned
I0516 00:05:30.703] Successful
I0516 00:05:30.703] message:error: You must provide one or more resources by argument or filename.
I0516 00:05:30.703] Example resource specifications include:
I0516 00:05:30.704]    '-f rsrc.yaml'
I0516 00:05:30.704]    '--filename=rsrc.json'
I0516 00:05:30.704]    '<resource> <name>'
I0516 00:05:30.704]    '<resource>'
I0516 00:05:30.704] has:must provide one or more resources
... skipping 15 lines ...
I0516 00:05:31.172] Successful
I0516 00:05:31.172] message:The following compatible plugins are available:
I0516 00:05:31.172] 
I0516 00:05:31.172] test/fixtures/pkg/kubectl/plugins/version/kubectl-version
I0516 00:05:31.172]   - warning: kubectl-version overwrites existing command: "kubectl version"
I0516 00:05:31.172] 
I0516 00:05:31.173] error: one plugin warning was found
I0516 00:05:31.173] has:kubectl-version overwrites existing command: "kubectl version"
I0516 00:05:31.251] Successful
I0516 00:05:31.251] message:The following compatible plugins are available:
I0516 00:05:31.251] 
I0516 00:05:31.251] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0516 00:05:31.251] test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
I0516 00:05:31.252]   - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0516 00:05:31.252] 
I0516 00:05:31.252] error: one plugin warning was found
I0516 00:05:31.252] has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
I0516 00:05:31.328] Successful
I0516 00:05:31.328] message:The following compatible plugins are available:
I0516 00:05:31.328] 
I0516 00:05:31.328] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0516 00:05:31.328] has:plugins are available
I0516 00:05:31.403] Successful
I0516 00:05:31.404] message:Unable read directory "test/fixtures/pkg/kubectl/plugins/empty" from your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory. Skipping...
I0516 00:05:31.404] error: unable to find any kubectl plugins in your PATH
I0516 00:05:31.404] has:unable to find any kubectl plugins in your PATH
I0516 00:05:31.477] Successful
I0516 00:05:31.478] message:I am plugin foo
I0516 00:05:31.478] has:plugin foo
I0516 00:05:31.555] Successful
I0516 00:05:31.556] message:Client Version: version.Info{Major:"1", Minor:"16+", GitVersion:"v1.16.0-alpha.0.61+16e2d5fc3764db", GitCommit:"16e2d5fc3764db426f4611304dd897ff308bb76a", GitTreeState:"clean", BuildDate:"2019-05-15T23:58:15Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
... skipping 9 lines ...
I0516 00:05:31.648] 
I0516 00:05:31.650] +++ Running case: test-cmd.run_impersonation_tests 
I0516 00:05:31.653] +++ working dir: /go/src/k8s.io/kubernetes
I0516 00:05:31.655] +++ command: run_impersonation_tests
I0516 00:05:31.666] +++ [0516 00:05:31] Testing impersonation
I0516 00:05:31.741] Successful
I0516 00:05:31.742] message:error: requesting groups or user-extra for  without impersonating a user
I0516 00:05:31.742] has:without impersonating a user
I0516 00:05:31.957] certificatesigningrequest.certificates.k8s.io/foo created
I0516 00:05:32.071] authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
I0516 00:05:32.164] authorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
I0516 00:05:32.247] certificatesigningrequest.certificates.k8s.io "foo" deleted
I0516 00:05:32.444] certificatesigningrequest.certificates.k8s.io/foo created
... skipping 54 lines ...
W0516 00:05:35.656] I0516 00:05:35.653853   47716 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:05:35.656] I0516 00:05:35.653870   47716 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:05:35.656] I0516 00:05:35.654133   47716 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:05:35.656] I0516 00:05:35.654145   47716 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:05:35.656] I0516 00:05:35.654163   47716 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:05:35.656] I0516 00:05:35.654150   47716 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:05:35.657] W0516 00:05:35.654330   47716 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0516 00:05:35.657] I0516 00:05:35.654673   47716 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:05:35.657] I0516 00:05:35.654712   47716 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:05:35.657] I0516 00:05:35.655020   47716 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:05:35.657] I0516 00:05:35.655032   47716 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:05:35.657] I0516 00:05:35.655053   47716 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:05:35.658] I0516 00:05:35.655059   47716 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 2 lines ...
W0516 00:05:35.658] I0516 00:05:35.655389   47716 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: []
W0516 00:05:35.658] I0516 00:05:35.655419   47716 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:05:35.658] I0516 00:05:35.655438   47716 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:05:35.659] I0516 00:05:35.655451   47716 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:05:35.659] I0516 00:05:35.655472   47716 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:05:35.659] I0516 00:05:35.655478   47716 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:05:35.659] W0516 00:05:35.655628   47716 clientconn.go:1251] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0516 00:05:35.659] I0516 00:05:35.655749   47716 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:05:35.659] I0516 00:05:35.655773   47716 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:05:35.660] I0516 00:05:35.655810   47716 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:05:35.660] I0516 00:05:35.655821   47716 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:05:35.660] I0516 00:05:35.655853   47716 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:05:35.660] I0516 00:05:35.655864   47716 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:05:35.660] I0516 00:05:35.655880   47716 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:05:35.660] I0516 00:05:35.655899   47716 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:05:35.661] E0516 00:05:35.655911   47716 controller.go:179] rpc error: code = Unavailable desc = transport is closing
W0516 00:05:35.661] I0516 00:05:35.655920   47716 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:05:35.661] I0516 00:05:35.655929   47716 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0516 00:05:35.756] + make test-integration
I0516 00:05:35.856] No resources found
I0516 00:05:35.857] No resources found
I0516 00:05:35.857] +++ [0516 00:05:35] TESTS PASSED
... skipping 7 lines ...
I0516 00:05:41.101] +++ [0516 00:05:41] On try 2, etcd: : http://127.0.0.1:2379
I0516 00:05:41.112] {"action":"set","node":{"key":"/_test","value":"","modifiedIndex":4,"createdIndex":4}}
I0516 00:05:41.115] +++ [0516 00:05:41] Running integration test cases
I0516 00:05:45.996] Running tests for APIVersion: v1,admissionregistration.k8s.io/v1beta1,admission.k8s.io/v1beta1,apps/v1,apps/v1beta1,apps/v1beta2,auditregistration.k8s.io/v1alpha1,authentication.k8s.io/v1,authentication.k8s.io/v1beta1,authorization.k8s.io/v1,authorization.k8s.io/v1beta1,autoscaling/v1,autoscaling/v2beta1,autoscaling/v2beta2,batch/v1,batch/v1beta1,batch/v2alpha1,certificates.k8s.io/v1beta1,coordination.k8s.io/v1beta1,coordination.k8s.io/v1,extensions/v1beta1,events.k8s.io/v1beta1,imagepolicy.k8s.io/v1alpha1,networking.k8s.io/v1,networking.k8s.io/v1beta1,node.k8s.io/v1alpha1,node.k8s.io/v1beta1,policy/v1beta1,rbac.authorization.k8s.io/v1,rbac.authorization.k8s.io/v1beta1,rbac.authorization.k8s.io/v1alpha1,scheduling.k8s.io/v1alpha1,scheduling.k8s.io/v1beta1,scheduling.k8s.io/v1,settings.k8s.io/v1alpha1,storage.k8s.io/v1beta1,storage.k8s.io/v1,storage.k8s.io/v1alpha1,
I0516 00:05:46.041] +++ [0516 00:05:46] Running tests without code coverage
W0516 00:07:02.052] # k8s.io/kubernetes/test/e2e/scheduling
W0516 00:07:02.052] test/e2e/scheduling/nvidia-gpus.go:201:2: undefined: By
W0516 00:07:02.053] test/e2e/scheduling/nvidia-gpus.go:205:2: undefined: Expect
W0516 00:07:02.053] test/e2e/scheduling/nvidia-gpus.go:205:20: undefined: HaveOccurred
W0516 00:07:02.053] test/e2e/scheduling/nvidia-gpus.go:209:2: undefined: Expect
W0516 00:07:02.053] test/e2e/scheduling/nvidia-gpus.go:209:20: undefined: HaveOccurred
W0516 00:07:02.053] test/e2e/scheduling/nvidia-gpus.go:212:2: undefined: Expect
W0516 00:07:02.054] test/e2e/scheduling/nvidia-gpus.go:212:20: undefined: HaveOccurred
W0516 00:07:02.054] test/e2e/scheduling/nvidia-gpus.go:214:2: undefined: Expect
W0516 00:07:02.054] test/e2e/scheduling/nvidia-gpus.go:214:20: undefined: HaveOccurred
W0516 00:07:02.054] test/e2e/scheduling/nvidia-gpus.go:216:2: undefined: By
W0516 00:07:02.054] test/e2e/scheduling/nvidia-gpus.go:216:2: too many errors
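Editor's note on the block above: the failure reported by this job is a compile error, not a test assertion. By is a Ginkgo helper and Expect/HaveOccurred are Gomega matchers, and the "undefined" errors indicate that test/e2e/scheduling/nvidia-gpus.go calls them without the corresponding packages being in scope. The FAIL for k8s.io/kubernetes/test/integration/auth further down is reported as "[build failed]", which is consistent with that package (transitively) depending on the broken one, although the import chain is not shown in this log. Below is a minimal, hypothetical Go sketch of how these identifiers normally resolve, assuming the ginkgo v1 / gomega import paths in use at the time; the package name, helper, and step text are illustrative, not taken from the PR.

    // Hypothetical sketch: with dot imports, By(...), Expect(...), and
    // HaveOccurred() resolve unqualified, which is what the failing call
    // sites in nvidia-gpus.go appear to expect.
    package scheduling

    import (
        . "github.com/onsi/ginkgo" // provides By(...)
        . "github.com/onsi/gomega" // provides Expect(...), HaveOccurred()
    )

    // checkJobRecovered mirrors the shape of the failing call sites; without
    // the dot imports above, every identifier below is "undefined".
    func checkJobRecovered(err error) {
        By("verifying the GPU job recovered after node recreation") // illustrative step text
        Expect(err).NotTo(HaveOccurred())
    }

The alternative that avoids dot imports is to qualify the calls, e.g. ginkgo.By("...") and gomega.Expect(err).NotTo(gomega.HaveOccurred()); either form would clear the "undefined: By / Expect / HaveOccurred" errors shown above.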
I0516 00:19:25.383] ok  	k8s.io/kubernetes/test/integration/apimachinery	276.856s
I0516 00:19:25.384] ok  	k8s.io/kubernetes/test/integration/apiserver	82.834s
I0516 00:19:25.385] ok  	k8s.io/kubernetes/test/integration/apiserver/admissionwebhook	67.323s
I0516 00:19:25.385] ok  	k8s.io/kubernetes/test/integration/apiserver/apply	54.447s
I0516 00:19:25.385] FAIL	k8s.io/kubernetes/test/integration/auth [build failed]
I0516 00:19:25.385] ok  	k8s.io/kubernetes/test/integration/client	53.989s
I0516 00:19:25.385] ok  	k8s.io/kubernetes/test/integration/configmap	4.159s
I0516 00:19:25.385] ok  	k8s.io/kubernetes/test/integration/cronjob	34.842s
I0516 00:19:25.385] ok  	k8s.io/kubernetes/test/integration/daemonset	532.353s
I0516 00:19:25.385] ok  	k8s.io/kubernetes/test/integration/defaulttolerationseconds	3.680s
I0516 00:19:25.386] ok  	k8s.io/kubernetes/test/integration/deployment	208.283s
... skipping 25 lines ...
I0516 00:19:25.389] ok  	k8s.io/kubernetes/test/integration/storageclasses	3.740s
I0516 00:19:25.389] ok  	k8s.io/kubernetes/test/integration/tls	8.316s
I0516 00:19:25.390] ok  	k8s.io/kubernetes/test/integration/ttlcontroller	10.016s
I0516 00:19:25.390] ok  	k8s.io/kubernetes/test/integration/volume	93.376s
I0516 00:19:25.390] ok  	k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration	196.697s
I0516 00:19:41.336] +++ [0516 00:19:41] Saved JUnit XML test report to /workspace/artifacts/junit_d431ed5f68ae4ddf888439fb96b687a923412204_20190516-000546.xml
I0516 00:19:41.340] Makefile:185: recipe for target 'test' failed
I0516 00:19:41.351] +++ [0516 00:19:41] Cleaning up etcd
W0516 00:19:41.452] make[1]: *** [test] Error 1
W0516 00:19:41.452] !!! [0516 00:19:41] Call tree:
W0516 00:19:41.452] !!! [0516 00:19:41]  1: hack/make-rules/test-integration.sh:102 runTests(...)
I0516 00:19:41.890] +++ [0516 00:19:41] Integration test cleanup complete
I0516 00:19:41.891] Makefile:204: recipe for target 'test-integration' failed
W0516 00:19:41.992] make: *** [test-integration] Error 1
W0516 00:19:46.396] Traceback (most recent call last):
W0516 00:19:46.396]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 178, in <module>
W0516 00:19:46.397]     ARGS.exclude_typecheck, ARGS.exclude_godep)
W0516 00:19:46.397]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 140, in main
W0516 00:19:46.397]     check(*cmd)
W0516 00:19:46.397]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W0516 00:19:46.397]     subprocess.check_call(cmd)
W0516 00:19:46.397]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0516 00:19:46.397]     raise CalledProcessError(retcode, cmd)
W0516 00:19:46.398] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=n', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'EXCLUDE_TYPECHECK=n', '-e', 'EXCLUDE_GODEP=n', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.14-v20190318-2ac98e338', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
E0516 00:19:46.403] Command failed
I0516 00:19:46.403] process 674 exited with code 1 after 28.4m
E0516 00:19:46.404] FAIL: pull-kubernetes-integration
I0516 00:19:46.404] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0516 00:19:46.998] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0516 00:19:47.046] process 112059 exited with code 0 after 0.0m
I0516 00:19:47.046] Call:  gcloud config get-value account
I0516 00:19:47.346] process 112071 exited with code 0 after 0.0m
I0516 00:19:47.346] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0516 00:19:47.347] Upload result and artifacts...
I0516 00:19:47.347] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/76401/pull-kubernetes-integration/1128809838948651009
I0516 00:19:47.347] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/76401/pull-kubernetes-integration/1128809838948651009/artifacts
W0516 00:19:48.571] CommandException: One or more URLs matched no objects.
E0516 00:19:48.700] Command failed
I0516 00:19:48.700] process 112083 exited with code 1 after 0.0m
W0516 00:19:48.701] Remote dir gs://kubernetes-jenkins/pr-logs/pull/76401/pull-kubernetes-integration/1128809838948651009/artifacts not exist yet
I0516 00:19:48.701] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/76401/pull-kubernetes-integration/1128809838948651009/artifacts
I0516 00:19:52.842] process 112225 exited with code 0 after 0.1m
W0516 00:19:52.842] metadata path /workspace/_artifacts/metadata.json does not exist
W0516 00:19:52.843] metadata not found or invalid, init with empty metadata
... skipping 23 lines ...