PR: tnozicka: #50102 Task 3: Until, backed by retry watcher
Result: FAILURE
Tests: 1 failed / 620 succeeded
Started: 2019-02-11 15:54
Elapsed: 28m46s
Revision
Builder: gke-prow-containerd-pool-99179761-mrdv
Refs: master:836db5c9, 67350:0fdc93c1
pod: 05958b77-2e15-11e9-8746-0a580a6c0714
infra-commit: 0e19c7061
repo: k8s.io/kubernetes
repo-commit: 78d08c6ea0d156e023b8a4ca8b89f973784d94d1
repos: {u'k8s.io/kubernetes': u'master:836db5c90e5706b0418091eb52f26ca3a01a7eee,67350:0fdc93c1b6eecd29ec025f8a2b9544004b136acb'}

Test Failures


k8s.io/kubernetes/test/integration/apimachinery [build failed] 0.00s

k8s.io/kubernetes/test/integration/apimachinery [build failed]
from junit_642613dbe8fbf016c1770a7007e34bb12666c617_20190211-161042.xml

620 passed tests

4 skipped tests

Error lines from build-log.txt

... skipping 314 lines ...
W0211 16:04:38.872] I0211 16:04:38.871344   54270 serving.go:311] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
W0211 16:04:38.872] I0211 16:04:38.871437   54270 server.go:561] external host was not specified, using 172.17.0.2
W0211 16:04:38.873] W0211 16:04:38.871452   54270 authentication.go:415] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
W0211 16:04:38.873] I0211 16:04:38.871724   54270 server.go:146] Version: v1.14.0-alpha.2.523+78d08c6ea0d156
W0211 16:04:39.347] I0211 16:04:39.346097   54270 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0211 16:04:39.347] I0211 16:04:39.346164   54270 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0211 16:04:39.347] E0211 16:04:39.346663   54270 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 16:04:39.348] E0211 16:04:39.346699   54270 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 16:04:39.348] E0211 16:04:39.346753   54270 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 16:04:39.348] E0211 16:04:39.346784   54270 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 16:04:39.348] E0211 16:04:39.346814   54270 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 16:04:39.348] E0211 16:04:39.346864   54270 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 16:04:39.349] I0211 16:04:39.346886   54270 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0211 16:04:39.349] I0211 16:04:39.346891   54270 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0211 16:04:39.349] I0211 16:04:39.348520   54270 clientconn.go:551] parsed scheme: ""
W0211 16:04:39.349] I0211 16:04:39.348543   54270 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0211 16:04:39.349] I0211 16:04:39.348619   54270 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0211 16:04:39.350] I0211 16:04:39.348698   54270 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 335 lines ...
W0211 16:04:39.704] W0211 16:04:39.703490   54270 genericapiserver.go:330] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0211 16:04:40.345] I0211 16:04:40.345019   54270 clientconn.go:551] parsed scheme: ""
W0211 16:04:40.346] I0211 16:04:40.345091   54270 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0211 16:04:40.346] I0211 16:04:40.345224   54270 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0211 16:04:40.346] I0211 16:04:40.345304   54270 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 16:04:40.346] I0211 16:04:40.345848   54270 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 16:04:40.620] E0211 16:04:40.619362   54270 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 16:04:40.620] E0211 16:04:40.619442   54270 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 16:04:40.621] E0211 16:04:40.619505   54270 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 16:04:40.621] E0211 16:04:40.619550   54270 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 16:04:40.621] E0211 16:04:40.619593   54270 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 16:04:40.621] E0211 16:04:40.619642   54270 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0211 16:04:40.621] I0211 16:04:40.619680   54270 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0211 16:04:40.621] I0211 16:04:40.619688   54270 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0211 16:04:40.622] I0211 16:04:40.621203   54270 clientconn.go:551] parsed scheme: ""
W0211 16:04:40.622] I0211 16:04:40.621238   54270 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0211 16:04:40.622] I0211 16:04:40.621293   54270 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0211 16:04:40.622] I0211 16:04:40.621347   54270 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 205 lines ...
W0211 16:05:19.242] I0211 16:05:19.242184   57631 controller_utils.go:1021] Waiting for caches to sync for ClusterRoleAggregator controller
I0211 16:05:19.343] +++ [0211 16:05:19] On try 3, controller-manager: ok
W0211 16:05:19.443] I0211 16:05:19.349568   57631 garbagecollector.go:130] Starting garbage collector controller
W0211 16:05:19.444] I0211 16:05:19.349622   57631 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0211 16:05:19.444] I0211 16:05:19.349569   57631 controllermanager.go:493] Started "garbagecollector"
W0211 16:05:19.444] I0211 16:05:19.349847   57631 graph_builder.go:308] GraphBuilder running
W0211 16:05:19.444] E0211 16:05:19.350439   57631 prometheus.go:138] failed to register depth metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_depth", help: "(Deprecated) Current depth of workqueue: disruption-recheck", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_depth" is not a valid metric name
W0211 16:05:19.444] E0211 16:05:19.350510   57631 prometheus.go:150] failed to register adds metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_adds", help: "(Deprecated) Total number of adds handled by workqueue: disruption-recheck", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_adds" is not a valid metric name
W0211 16:05:19.445] E0211 16:05:19.350615   57631 prometheus.go:162] failed to register latency metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_queue_latency", help: "(Deprecated) How long an item stays in workqueuedisruption-recheck before being requested.", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_queue_latency" is not a valid metric name
W0211 16:05:19.445] E0211 16:05:19.350674   57631 prometheus.go:174] failed to register work_duration metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_work_duration", help: "(Deprecated) How long processing an item from workqueuedisruption-recheck takes.", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_work_duration" is not a valid metric name
W0211 16:05:19.446] E0211 16:05:19.350700   57631 prometheus.go:189] failed to register unfinished_work_seconds metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_unfinished_work_seconds", help: "(Deprecated) How many seconds of work disruption-recheck has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_unfinished_work_seconds" is not a valid metric name
W0211 16:05:19.446] E0211 16:05:19.350730   57631 prometheus.go:202] failed to register longest_running_processor_microseconds metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for disruption-recheck been running.", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_longest_running_processor_microseconds" is not a valid metric name
W0211 16:05:19.446] E0211 16:05:19.350786   57631 prometheus.go:214] failed to register retries metric disruption-recheck: descriptor Desc{fqName: "disruption-recheck_retries", help: "(Deprecated) Total number of retries handled by workqueue: disruption-recheck", constLabels: {}, variableLabels: []} is invalid: "disruption-recheck_retries" is not a valid metric name
W0211 16:05:19.446] I0211 16:05:19.350866   57631 controllermanager.go:493] Started "disruption"
W0211 16:05:19.446] I0211 16:05:19.351159   57631 disruption.go:286] Starting disruption controller
W0211 16:05:19.447] I0211 16:05:19.351186   57631 controller_utils.go:1021] Waiting for caches to sync for disruption controller
W0211 16:05:19.447] I0211 16:05:19.351456   57631 controllermanager.go:493] Started "csrapproving"
W0211 16:05:19.447] I0211 16:05:19.351482   57631 certificate_controller.go:113] Starting certificate controller
W0211 16:05:19.447] I0211 16:05:19.351515   57631 controller_utils.go:1021] Waiting for caches to sync for certificate controller
... skipping 26 lines ...
W0211 16:05:19.451] I0211 16:05:19.409330   57631 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
W0211 16:05:19.451] I0211 16:05:19.409345   57631 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
W0211 16:05:19.451] I0211 16:05:19.409373   57631 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for jobs.batch
W0211 16:05:19.451] I0211 16:05:19.409424   57631 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.extensions
W0211 16:05:19.451] I0211 16:05:19.409458   57631 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.extensions
W0211 16:05:19.451] I0211 16:05:19.409476   57631 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.apps
W0211 16:05:19.452] E0211 16:05:19.409497   57631 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0211 16:05:19.452] I0211 16:05:19.409530   57631 controllermanager.go:493] Started "resourcequota"
W0211 16:05:19.452] W0211 16:05:19.409564   57631 controllermanager.go:485] Skipping "csrsigning"
W0211 16:05:19.452] I0211 16:05:19.409669   57631 resource_quota_controller.go:276] Starting resource quota controller
W0211 16:05:19.452] I0211 16:05:19.409744   57631 controller_utils.go:1021] Waiting for caches to sync for resource quota controller
W0211 16:05:19.452] I0211 16:05:19.409785   57631 resource_quota_monitor.go:301] QuotaMonitor running
W0211 16:05:19.452] I0211 16:05:19.409847   57631 controllermanager.go:493] Started "csrcleaner"
W0211 16:05:19.453] I0211 16:05:19.409916   57631 cleaner.go:81] Starting CSR cleaner controller
W0211 16:05:19.453] E0211 16:05:19.410408   57631 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0211 16:05:19.453] W0211 16:05:19.410428   57631 controllermanager.go:485] Skipping "service"
W0211 16:05:19.453] I0211 16:05:19.410436   57631 core.go:172] Will not configure cloud provider routes for allocate-node-cidrs: false, configure-cloud-routes: true.
W0211 16:05:19.453] W0211 16:05:19.410440   57631 controllermanager.go:485] Skipping "route"
W0211 16:05:19.453] I0211 16:05:19.410865   57631 controllermanager.go:493] Started "job"
W0211 16:05:19.453] I0211 16:05:19.411151   57631 job_controller.go:143] Starting job controller
W0211 16:05:19.454] I0211 16:05:19.411174   57631 controller_utils.go:1021] Waiting for caches to sync for job controller
W0211 16:05:19.454] I0211 16:05:19.411469   57631 controllermanager.go:493] Started "replicaset"
W0211 16:05:19.454] I0211 16:05:19.411654   57631 replica_set.go:182] Starting replicaset controller
W0211 16:05:19.454] I0211 16:05:19.411757   57631 node_lifecycle_controller.go:77] Sending events to api server
W0211 16:05:19.454] I0211 16:05:19.411762   57631 controller_utils.go:1021] Waiting for caches to sync for ReplicaSet controller
W0211 16:05:19.454] E0211 16:05:19.411803   57631 core.go:162] failed to start cloud node lifecycle controller: no cloud provider provided
W0211 16:05:19.454] W0211 16:05:19.411810   57631 controllermanager.go:485] Skipping "cloud-node-lifecycle"
W0211 16:05:19.454] I0211 16:05:19.437547   57631 controller_utils.go:1028] Caches are synced for PV protection controller
W0211 16:05:19.457] I0211 16:05:19.456875   57631 controller_utils.go:1028] Caches are synced for expand controller
W0211 16:05:19.457] I0211 16:05:19.457311   57631 controller_utils.go:1028] Caches are synced for TTL controller
W0211 16:05:19.485] W0211 16:05:19.485050   57631 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0211 16:05:19.512] I0211 16:05:19.511501   57631 controller_utils.go:1028] Caches are synced for job controller
W0211 16:05:19.512] I0211 16:05:19.512331   57631 controller_utils.go:1028] Caches are synced for ReplicaSet controller
W0211 16:05:19.522] I0211 16:05:19.521879   57631 controller_utils.go:1028] Caches are synced for attach detach controller
W0211 16:05:19.523] I0211 16:05:19.522319   57631 controller_utils.go:1028] Caches are synced for GC controller
W0211 16:05:19.530] I0211 16:05:19.529724   57631 controller_utils.go:1028] Caches are synced for namespace controller
W0211 16:05:19.531] I0211 16:05:19.530888   57631 controller_utils.go:1028] Caches are synced for service account controller
W0211 16:05:19.532] I0211 16:05:19.531882   57631 controller_utils.go:1028] Caches are synced for endpoint controller
W0211 16:05:19.534] I0211 16:05:19.533658   54270 controller.go:606] quota admission added evaluator for: serviceaccounts
W0211 16:05:19.537] I0211 16:05:19.537278   57631 controller_utils.go:1028] Caches are synced for PVC protection controller
W0211 16:05:19.537] I0211 16:05:19.537428   57631 controller_utils.go:1028] Caches are synced for persistent volume controller
W0211 16:05:19.539] I0211 16:05:19.539195   57631 controller_utils.go:1028] Caches are synced for HPA controller
W0211 16:05:19.546] I0211 16:05:19.546234   57631 controller_utils.go:1028] Caches are synced for ClusterRoleAggregator controller
W0211 16:05:19.557] I0211 16:05:19.557041   57631 controller_utils.go:1028] Caches are synced for ReplicationController controller
W0211 16:05:19.565] E0211 16:05:19.564319   57631 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
W0211 16:05:19.653] I0211 16:05:19.651987   57631 controller_utils.go:1028] Caches are synced for certificate controller
W0211 16:05:19.732] I0211 16:05:19.731520   57631 controller_utils.go:1028] Caches are synced for daemon sets controller
W0211 16:05:19.741] I0211 16:05:19.740331   57631 controller_utils.go:1028] Caches are synced for stateful set controller
W0211 16:05:19.821] I0211 16:05:19.820453   57631 controller_utils.go:1028] Caches are synced for taint controller
W0211 16:05:19.821] I0211 16:05:19.820614   57631 node_lifecycle_controller.go:1113] Initializing eviction metric for zone: 
W0211 16:05:19.822] I0211 16:05:19.820636   57631 taint_manager.go:198] Starting NoExecuteTaintManager
... skipping 38 lines ...
I0211 16:05:20.663] +++ [0211 16:05:20] Testing kubectl version: verify json output using additional --client flag does not contain serverVersion
I0211 16:05:20.814] Successful: --client --output json has correct client info
I0211 16:05:20.821] Successful: --client --output json has no server info
I0211 16:05:20.825] +++ [0211 16:05:20] Testing kubectl version: compare json output using additional --short flag
W0211 16:05:20.926] I0211 16:05:20.844241   57631 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0211 16:05:20.945] I0211 16:05:20.944744   57631 controller_utils.go:1028] Caches are synced for garbage collector controller
W0211 16:05:20.959] E0211 16:05:20.959007   57631 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
I0211 16:05:21.060] Successful: --short --output client json info is equal to non short result
I0211 16:05:21.060] Successful: --short --output server json info is equal to non short result
I0211 16:05:21.060] +++ [0211 16:05:20] Testing kubectl version: compare json output with yaml output
I0211 16:05:21.161] Successful: --output json/yaml has identical information
I0211 16:05:21.180] +++ exit code: 0
I0211 16:05:21.203] Recording: run_kubectl_config_set_tests
... skipping 42 lines ...
I0211 16:05:24.000] +++ working dir: /go/src/k8s.io/kubernetes
I0211 16:05:24.003] +++ command: run_RESTMapper_evaluation_tests
I0211 16:05:24.018] +++ [0211 16:05:24] Creating namespace namespace-1549901124-7610
I0211 16:05:24.098] namespace/namespace-1549901124-7610 created
I0211 16:05:24.172] Context "test" modified.
I0211 16:05:24.180] +++ [0211 16:05:24] Testing RESTMapper
I0211 16:05:24.317] +++ [0211 16:05:24] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0211 16:05:24.335] +++ exit code: 0
I0211 16:05:24.476] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0211 16:05:24.476] bindings                                                                      true         Binding
I0211 16:05:24.477] componentstatuses                 cs                                          false        ComponentStatus
I0211 16:05:24.477] configmaps                        cm                                          true         ConfigMap
I0211 16:05:24.477] endpoints                         ep                                          true         Endpoints
... skipping 585 lines ...
I0211 16:05:45.787] core.sh:186: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 16:05:45.996] core.sh:190: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 16:05:46.101] core.sh:194: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 16:05:46.283] core.sh:198: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 16:05:46.390] core.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 16:05:46.486] pod "valid-pod" force deleted
W0211 16:05:46.586] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0211 16:05:46.587] error: setting 'all' parameter but found a non empty selector. 
W0211 16:05:46.587] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0211 16:05:46.687] core.sh:206: Successful get pods -l'name in (valid-pod)' {{range.items}}{{$id_field}}:{{end}}: 
I0211 16:05:46.697] core.sh:211: Successful get namespaces {{range.items}}{{ if eq $id_field \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
I0211 16:05:46.775] namespace/test-kubectl-describe-pod created
I0211 16:05:46.880] core.sh:215: Successful get namespaces/test-kubectl-describe-pod {{.metadata.name}}: test-kubectl-describe-pod
I0211 16:05:46.984] core.sh:219: Successful get secrets --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 11 lines ...
I0211 16:05:48.026] poddisruptionbudget.policy/test-pdb-3 created
I0211 16:05:48.137] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0211 16:05:48.219] poddisruptionbudget.policy/test-pdb-4 created
I0211 16:05:48.327] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0211 16:05:48.512] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 16:05:48.712] pod/env-test-pod created
W0211 16:05:48.812] error: min-available and max-unavailable cannot be both specified
I0211 16:05:48.944] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0211 16:05:48.945] Name:               env-test-pod
I0211 16:05:48.945] Namespace:          test-kubectl-describe-pod
I0211 16:05:48.945] Priority:           0
I0211 16:05:48.945] PriorityClassName:  <none>
I0211 16:05:48.945] Node:               <none>
... skipping 145 lines ...
I0211 16:06:01.692] service "modified" deleted
I0211 16:06:01.788] replicationcontroller "modified" deleted
I0211 16:06:02.077] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 16:06:02.241] pod/valid-pod created
I0211 16:06:02.356] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 16:06:02.538] Successful
I0211 16:06:02.538] message:Error from server: cannot restore map from string
I0211 16:06:02.538] has:cannot restore map from string
W0211 16:06:02.639] E0211 16:06:02.529199   54270 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
I0211 16:06:02.739] Successful
I0211 16:06:02.740] message:pod/valid-pod patched (no change)
I0211 16:06:02.740] has:patched (no change)
I0211 16:06:02.740] pod/valid-pod patched
I0211 16:06:02.844] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0211 16:06:02.952] core.sh:457: Successful get pods {{range.items}}{{.metadata.annotations}}:{{end}}: map[kubernetes.io/change-cause:kubectl patch pod valid-pod --server=http://127.0.0.1:8080 --match-server-version=true --record=true --patch={"spec":{"containers":[{"name": "kubernetes-serve-hostname", "image": "nginx"}]}}]:
... skipping 4 lines ...
I0211 16:06:03.451] pod/valid-pod patched
I0211 16:06:03.563] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0211 16:06:03.658] pod/valid-pod patched
I0211 16:06:03.769] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0211 16:06:03.953] pod/valid-pod patched
I0211 16:06:04.068] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0211 16:06:04.272] +++ [0211 16:06:04] "kubectl patch with resourceVersion 502" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
I0211 16:06:04.553] pod "valid-pod" deleted
I0211 16:06:04.565] pod/valid-pod replaced
I0211 16:06:04.677] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0211 16:06:04.850] Successful
I0211 16:06:04.851] message:error: --grace-period must have --force specified
I0211 16:06:04.851] has:\-\-grace-period must have \-\-force specified
I0211 16:06:05.035] Successful
I0211 16:06:05.035] message:error: --timeout must have --force specified
I0211 16:06:05.036] has:\-\-timeout must have \-\-force specified
I0211 16:06:05.205] node/node-v1-test created
W0211 16:06:05.306] W0211 16:06:05.205022   57631 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0211 16:06:05.406] node/node-v1-test replaced
I0211 16:06:05.516] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0211 16:06:05.604] node "node-v1-test" deleted
I0211 16:06:05.716] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0211 16:06:06.033] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0211 16:06:07.115] core.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 16 lines ...
I0211 16:06:07.693] core.sh:593: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
I0211 16:06:07.785] pod/valid-pod labeled
W0211 16:06:07.886] Edit cancelled, no changes made.
W0211 16:06:07.886] Edit cancelled, no changes made.
W0211 16:06:07.886] Edit cancelled, no changes made.
W0211 16:06:07.886] Edit cancelled, no changes made.
W0211 16:06:07.887] error: 'name' already has a value (valid-pod), and --overwrite is false
I0211 16:06:07.987] core.sh:597: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
I0211 16:06:07.995] core.sh:601: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 16:06:08.090] pod "valid-pod" force deleted
W0211 16:06:08.191] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0211 16:06:08.291] core.sh:605: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 16:06:08.292] +++ [0211 16:06:08] Creating namespace namespace-1549901168-25518
... skipping 82 lines ...
I0211 16:06:15.879] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0211 16:06:15.882] +++ working dir: /go/src/k8s.io/kubernetes
I0211 16:06:15.884] +++ command: run_kubectl_create_error_tests
I0211 16:06:15.898] +++ [0211 16:06:15] Creating namespace namespace-1549901175-27665
I0211 16:06:15.978] namespace/namespace-1549901175-27665 created
I0211 16:06:16.062] Context "test" modified.
I0211 16:06:16.071] +++ [0211 16:06:16] Testing kubectl create with error
W0211 16:06:16.172] Error: required flag(s) "filename" not set
W0211 16:06:16.172] 
W0211 16:06:16.172] 
W0211 16:06:16.172] Examples:
W0211 16:06:16.172]   # Create a pod using the data in pod.json.
W0211 16:06:16.172]   kubectl create -f ./pod.json
W0211 16:06:16.172]   
... skipping 38 lines ...
W0211 16:06:16.177]   kubectl create -f FILENAME [options]
W0211 16:06:16.178] 
W0211 16:06:16.178] Use "kubectl <command> --help" for more information about a given command.
W0211 16:06:16.178] Use "kubectl options" for a list of global command-line options (applies to all commands).
W0211 16:06:16.178] 
W0211 16:06:16.178] required flag(s) "filename" not set
I0211 16:06:16.324] +++ [0211 16:06:16] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0211 16:06:16.424] kubectl convert is DEPRECATED and will be removed in a future version.
W0211 16:06:16.425] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0211 16:06:16.525] +++ exit code: 0
I0211 16:06:16.555] Recording: run_kubectl_apply_tests
I0211 16:06:16.556] Running command: run_kubectl_apply_tests
I0211 16:06:16.581] 
... skipping 21 lines ...
W0211 16:06:18.899] I0211 16:06:18.346569   57631 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549901176-28207", Name:"test-deployment-retainkeys", UID:"f6ce5b20-2e16-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"513", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test-deployment-retainkeys-ddc987c6 to 1
W0211 16:06:18.900] I0211 16:06:18.350898   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549901176-28207", Name:"test-deployment-retainkeys-ddc987c6", UID:"f739e0cf-2e16-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"516", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-deployment-retainkeys-ddc987c6-hcznd
I0211 16:06:19.000] apply.sh:67: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 16:06:19.091] pod/selector-test-pod created
I0211 16:06:19.202] apply.sh:71: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0211 16:06:19.305] Successful
I0211 16:06:19.305] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0211 16:06:19.305] has:pods "selector-test-pod-dont-apply" not found
I0211 16:06:19.393] pod "selector-test-pod" deleted
I0211 16:06:19.500] apply.sh:80: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 16:06:19.753] pod/test-pod created (server dry run)
I0211 16:06:19.865] apply.sh:85: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 16:06:20.044] pod/test-pod created
... skipping 4 lines ...
W0211 16:06:21.057] I0211 16:06:21.056897   54270 clientconn.go:551] parsed scheme: ""
W0211 16:06:21.058] I0211 16:06:21.056941   54270 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0211 16:06:21.058] I0211 16:06:21.056975   54270 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0211 16:06:21.058] I0211 16:06:21.057012   54270 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 16:06:21.058] I0211 16:06:21.057492   54270 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 16:06:21.064] I0211 16:06:21.064147   54270 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0211 16:06:21.117] E0211 16:06:21.115233   57631 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "mygroup.example.com/v1alpha1, Resource=resources": unable to monitor quota for resource "mygroup.example.com/v1alpha1, Resource=resources"]
W0211 16:06:21.166] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
W0211 16:06:21.205] I0211 16:06:21.204212   57631 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0211 16:06:21.305] I0211 16:06:21.304613   57631 controller_utils.go:1028] Caches are synced for garbage collector controller
I0211 16:06:21.406] kind.mygroup.example.com/myobj created (server dry run)
I0211 16:06:21.406] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0211 16:06:21.406] apply.sh:129: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 16:06:21.561] pod/a created
I0211 16:06:22.880] apply.sh:134: Successful get pods a {{.metadata.name}}: a
I0211 16:06:22.989] Successful
I0211 16:06:22.989] message:Error from server (NotFound): pods "b" not found
I0211 16:06:22.989] has:pods "b" not found
I0211 16:06:23.172] pod/b created
I0211 16:06:23.189] pod/a pruned
I0211 16:06:24.693] apply.sh:142: Successful get pods b {{.metadata.name}}: b
I0211 16:06:24.794] Successful
I0211 16:06:24.794] message:Error from server (NotFound): pods "a" not found
I0211 16:06:24.794] has:pods "a" not found
I0211 16:06:24.884] pod "b" deleted
I0211 16:06:24.993] apply.sh:152: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 16:06:25.173] pod/a created
I0211 16:06:25.280] apply.sh:157: Successful get pods a {{.metadata.name}}: a
I0211 16:06:25.376] Successful
I0211 16:06:25.377] message:Error from server (NotFound): pods "b" not found
I0211 16:06:25.377] has:pods "b" not found
I0211 16:06:25.559] pod/b created
I0211 16:06:25.671] apply.sh:165: Successful get pods a {{.metadata.name}}: a
I0211 16:06:25.772] apply.sh:166: Successful get pods b {{.metadata.name}}: b
I0211 16:06:25.863] pod "a" deleted
I0211 16:06:25.869] pod "b" deleted
I0211 16:06:26.061] Successful
I0211 16:06:26.062] message:error: all resources selected for prune without explicitly passing --all. To prune all resources, pass the --all flag. If you did not mean to prune all resources, specify a label selector
I0211 16:06:26.062] has:all resources selected for prune without explicitly passing --all
I0211 16:06:26.238] pod/a created
I0211 16:06:26.247] pod/b created
I0211 16:06:26.258] service/prune-svc created
I0211 16:06:27.574] apply.sh:178: Successful get pods a {{.metadata.name}}: a
I0211 16:06:27.678] apply.sh:179: Successful get pods b {{.metadata.name}}: b
... skipping 137 lines ...
I0211 16:06:40.142] Context "test" modified.
I0211 16:06:40.150] +++ [0211 16:06:40] Testing kubectl create filter
I0211 16:06:40.252] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 16:06:40.422] pod/selector-test-pod created
I0211 16:06:40.535] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0211 16:06:40.632] Successful
I0211 16:06:40.632] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0211 16:06:40.632] has:pods "selector-test-pod-dont-apply" not found
I0211 16:06:40.717] pod "selector-test-pod" deleted
I0211 16:06:40.742] +++ exit code: 0
I0211 16:06:40.787] Recording: run_kubectl_apply_deployments_tests
I0211 16:06:40.788] Running command: run_kubectl_apply_deployments_tests
I0211 16:06:40.812] 
... skipping 26 lines ...
I0211 16:06:42.754] apps.sh:131: Successful get deployments my-depl {{.metadata.labels.l2}}: l2
I0211 16:06:42.853] deployment.extensions "my-depl" deleted
I0211 16:06:42.862] replicaset.extensions "my-depl-64775887d7" deleted
I0211 16:06:42.868] replicaset.extensions "my-depl-656cffcbcc" deleted
I0211 16:06:42.876] pod "my-depl-64775887d7-z92hb" deleted
I0211 16:06:42.881] pod "my-depl-656cffcbcc-lhjdw" deleted
W0211 16:06:42.982] E0211 16:06:42.880272   57631 replica_set.go:450] Sync "namespace-1549901200-12274/my-depl-656cffcbcc" failed with replicasets.apps "my-depl-656cffcbcc" not found
I0211 16:06:43.083] apps.sh:137: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 16:06:43.113] apps.sh:138: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 16:06:43.218] apps.sh:139: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 16:06:43.318] apps.sh:143: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 16:06:43.486] deployment.extensions/nginx created
W0211 16:06:43.586] I0211 16:06:43.488979   57631 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549901200-12274", Name:"nginx", UID:"0635e16f-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"709", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-776cc67f78 to 3
W0211 16:06:43.587] I0211 16:06:43.493018   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549901200-12274", Name:"nginx-776cc67f78", UID:"0636619b-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"710", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-776cc67f78-8gghh
W0211 16:06:43.587] I0211 16:06:43.496303   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549901200-12274", Name:"nginx-776cc67f78", UID:"0636619b-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"710", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-776cc67f78-tzwww
W0211 16:06:43.588] I0211 16:06:43.497908   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549901200-12274", Name:"nginx-776cc67f78", UID:"0636619b-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"710", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-776cc67f78-wtzl6
I0211 16:06:43.688] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I0211 16:06:47.849] Successful
I0211 16:06:47.850] message:Error from server (Conflict): error when applying patch:
I0211 16:06:47.851] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1549901200-12274\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0211 16:06:47.851] to:
I0211 16:06:47.851] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I0211 16:06:47.851] Name: "nginx", Namespace: "namespace-1549901200-12274"
I0211 16:06:47.852] Object: &{map["status":map["unavailableReplicas":'\x03' "conditions":[map["type":"Available" "status":"False" "lastUpdateTime":"2019-02-11T16:06:43Z" "lastTransitionTime":"2019-02-11T16:06:43Z" "reason":"MinimumReplicasUnavailable" "message":"Deployment does not have minimum availability."]] "observedGeneration":'\x01' "replicas":'\x03' "updatedReplicas":'\x03'] "kind":"Deployment" "apiVersion":"extensions/v1beta1" "metadata":map["uid":"0635e16f-2e17-11e9-9664-0242ac110002" "creationTimestamp":"2019-02-11T16:06:43Z" "labels":map["name":"nginx"] "name":"nginx" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1549901200-12274/deployments/nginx" "generation":'\x01' "annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1549901200-12274\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"] "namespace":"namespace-1549901200-12274" "resourceVersion":"722"] "spec":map["replicas":'\x03' "selector":map["matchLabels":map["name":"nginx1"]] "template":map["metadata":map["labels":map["name":"nginx1"] "creationTimestamp":<nil>] "spec":map["containers":[map["image":"k8s.gcr.io/nginx:test-cmd" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[] "terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File" "imagePullPolicy":"IfNotPresent" "name":"nginx"]] "restartPolicy":"Always" "terminationGracePeriodSeconds":'\x1e' "dnsPolicy":"ClusterFirst" "securityContext":map[] "schedulerName":"default-scheduler"]] "strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxUnavailable":'\x01' "maxSurge":'\x01']] "revisionHistoryLimit":%!q(int64=+2147483647) "progressDeadlineSeconds":%!q(int64=+2147483647)]]}
I0211 16:06:47.853] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I0211 16:06:47.853] has:Error from server (Conflict)
W0211 16:06:51.169] E0211 16:06:51.168236   57631 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0211 16:06:51.358] I0211 16:06:51.357691   57631 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0211 16:06:51.459] I0211 16:06:51.458239   57631 controller_utils.go:1028] Caches are synced for garbage collector controller
I0211 16:06:53.100] deployment.extensions/nginx configured
W0211 16:06:53.201] I0211 16:06:53.104209   57631 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549901200-12274", Name:"nginx", UID:"0bf0b819-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"745", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7bd4fbc645 to 3
W0211 16:06:53.201] I0211 16:06:53.112645   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549901200-12274", Name:"nginx-7bd4fbc645", UID:"0bf15b7e-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"746", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7bd4fbc645-vsnd7
W0211 16:06:53.201] I0211 16:06:53.116567   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549901200-12274", Name:"nginx-7bd4fbc645", UID:"0bf15b7e-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"746", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7bd4fbc645-v9qq4
... skipping 144 lines ...
I0211 16:07:00.702] +++ [0211 16:07:00] Creating namespace namespace-1549901220-24316
I0211 16:07:00.782] namespace/namespace-1549901220-24316 created
I0211 16:07:00.857] Context "test" modified.
I0211 16:07:00.864] +++ [0211 16:07:00] Testing kubectl get
I0211 16:07:00.965] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 16:07:01.060] Successful
I0211 16:07:01.060] message:Error from server (NotFound): pods "abc" not found
I0211 16:07:01.060] has:pods "abc" not found
I0211 16:07:01.161] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 16:07:01.259] Successful
I0211 16:07:01.259] message:Error from server (NotFound): pods "abc" not found
I0211 16:07:01.260] has:pods "abc" not found
I0211 16:07:01.362] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 16:07:01.457] Successful
I0211 16:07:01.457] message:{
I0211 16:07:01.458]     "apiVersion": "v1",
I0211 16:07:01.458]     "items": [],
... skipping 23 lines ...
I0211 16:07:01.845] has not:No resources found
I0211 16:07:01.939] Successful
I0211 16:07:01.939] message:NAME
I0211 16:07:01.940] has not:No resources found
I0211 16:07:02.045] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 16:07:02.182] Successful
I0211 16:07:02.183] message:error: the server doesn't have a resource type "foobar"
I0211 16:07:02.183] has not:No resources found
I0211 16:07:02.278] Successful
I0211 16:07:02.279] message:No resources found.
I0211 16:07:02.279] has:No resources found
I0211 16:07:02.380] Successful
I0211 16:07:02.380] message:
I0211 16:07:02.380] has not:No resources found
I0211 16:07:02.484] Successful
I0211 16:07:02.484] message:No resources found.
I0211 16:07:02.484] has:No resources found
I0211 16:07:02.590] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 16:07:02.691] Successful
I0211 16:07:02.691] message:Error from server (NotFound): pods "abc" not found
I0211 16:07:02.691] has:pods "abc" not found
I0211 16:07:02.693] FAIL!
I0211 16:07:02.694] message:Error from server (NotFound): pods "abc" not found
I0211 16:07:02.694] has not:List
I0211 16:07:02.694] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0211 16:07:02.836] Successful
I0211 16:07:02.836] message:I0211 16:07:02.771939   69992 loader.go:359] Config loaded from file /tmp/tmp.IEzPcKmyRi/.kube/config
I0211 16:07:02.836] I0211 16:07:02.773618   69992 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0211 16:07:02.836] I0211 16:07:02.803734   69992 round_trippers.go:438] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 653 lines ...
I0211 16:07:06.475] }
I0211 16:07:06.577] get.sh:155: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 16:07:06.854] <no value>Successful
I0211 16:07:06.855] message:valid-pod:
I0211 16:07:06.855] has:valid-pod:
I0211 16:07:06.948] Successful
I0211 16:07:06.949] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0211 16:07:06.949] 	template was:
I0211 16:07:06.949] 		{.missing}
I0211 16:07:06.949] 	object given to jsonpath engine was:
I0211 16:07:06.950] 		map[string]interface {}{"kind":"Pod", "apiVersion":"v1", "metadata":map[string]interface {}{"resourceVersion":"820", "creationTimestamp":"2019-02-11T16:07:06Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1549901225-4662", "selfLink":"/api/v1/namespaces/namespace-1549901225-4662/pods/valid-pod", "uid":"13d9f697-2e17-11e9-9664-0242ac110002"}, "spec":map[string]interface {}{"restartPolicy":"Always", "terminationGracePeriodSeconds":30, "dnsPolicy":"ClusterFirst", "securityContext":map[string]interface {}{}, "schedulerName":"default-scheduler", "priority":0, "enableServiceLinks":true, "containers":[]interface {}{map[string]interface {}{"name":"kubernetes-serve-hostname", "image":"k8s.gcr.io/serve_hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File", "imagePullPolicy":"Always"}}}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
I0211 16:07:06.950] has:missing is not found
I0211 16:07:07.046] Successful
I0211 16:07:07.046] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0211 16:07:07.046] 	template was:
I0211 16:07:07.046] 		{{.missing}}
I0211 16:07:07.047] 	raw data was:
I0211 16:07:07.047] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-02-11T16:07:06Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1549901225-4662","resourceVersion":"820","selfLink":"/api/v1/namespaces/namespace-1549901225-4662/pods/valid-pod","uid":"13d9f697-2e17-11e9-9664-0242ac110002"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0211 16:07:07.047] 	object given to template engine was:
I0211 16:07:07.048] 		map[apiVersion:v1 kind:Pod metadata:map[selfLink:/api/v1/namespaces/namespace-1549901225-4662/pods/valid-pod uid:13d9f697-2e17-11e9-9664-0242ac110002 creationTimestamp:2019-02-11T16:07:06Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1549901225-4662 resourceVersion:820] spec:map[priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30 containers:[map[resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname]] dnsPolicy:ClusterFirst enableServiceLinks:true] status:map[phase:Pending qosClass:Guaranteed]]
I0211 16:07:07.048] has:map has no entry for key "missing"
W0211 16:07:07.149] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
W0211 16:07:08.138] E0211 16:07:08.137600   70382 streamwatcher.go:109] Unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)
I0211 16:07:08.239] Successful
I0211 16:07:08.239] message:NAME        READY   STATUS    RESTARTS   AGE
I0211 16:07:08.239] valid-pod   0/1     Pending   0          1s
I0211 16:07:08.239] has:STATUS
I0211 16:07:08.239] Successful
... skipping 80 lines ...
I0211 16:07:10.447]   terminationGracePeriodSeconds: 30
I0211 16:07:10.447] status:
I0211 16:07:10.447]   phase: Pending
I0211 16:07:10.448]   qosClass: Guaranteed
I0211 16:07:10.448] has:name: valid-pod
I0211 16:07:10.448] Successful
I0211 16:07:10.448] message:Error from server (NotFound): pods "invalid-pod" not found
I0211 16:07:10.448] has:"invalid-pod" not found
I0211 16:07:10.530] pod "valid-pod" deleted
I0211 16:07:10.638] get.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 16:07:10.808] pod/redis-master created
I0211 16:07:10.818] pod/valid-pod created
I0211 16:07:10.918] Successful
... skipping 254 lines ...
I0211 16:07:15.784] Running command: run_create_secret_tests
I0211 16:07:15.807] 
I0211 16:07:15.809] +++ Running case: test-cmd.run_create_secret_tests 
I0211 16:07:15.812] +++ working dir: /go/src/k8s.io/kubernetes
I0211 16:07:15.815] +++ command: run_create_secret_tests
I0211 16:07:15.915] Successful
I0211 16:07:15.916] message:Error from server (NotFound): secrets "mysecret" not found
I0211 16:07:15.916] has:secrets "mysecret" not found
I0211 16:07:16.088] Successful
I0211 16:07:16.088] message:Error from server (NotFound): secrets "mysecret" not found
I0211 16:07:16.089] has:secrets "mysecret" not found
I0211 16:07:16.091] Successful
I0211 16:07:16.091] message:user-specified
I0211 16:07:16.092] has:user-specified
I0211 16:07:16.172] Successful
I0211 16:07:16.253] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"19bd53fd-2e17-11e9-9664-0242ac110002","resourceVersion":"895","creationTimestamp":"2019-02-11T16:07:16Z"}}
... skipping 99 lines ...
I0211 16:07:19.405] has:Timeout exceeded while reading body
I0211 16:07:19.502] Successful
I0211 16:07:19.502] message:NAME        READY   STATUS    RESTARTS   AGE
I0211 16:07:19.502] valid-pod   0/1     Pending   0          1s
I0211 16:07:19.502] has:valid-pod
I0211 16:07:19.586] Successful
I0211 16:07:19.586] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0211 16:07:19.586] has:Invalid timeout value
I0211 16:07:19.674] pod "valid-pod" deleted
I0211 16:07:19.701] +++ exit code: 0
I0211 16:07:19.743] Recording: run_crd_tests
I0211 16:07:19.743] Running command: run_crd_tests
I0211 16:07:19.772] 
... skipping 16 lines ...
I0211 16:07:21.229] namespace/namespace-1549901241-19736 created
I0211 16:07:21.316] Context "test" modified.
I0211 16:07:21.324] +++ [0211 16:07:21] Testing kubectl non-native resources
I0211 16:07:21.405] {"kind":"APIResourceList","apiVersion":"v1","groupVersion":"company.com/v1","resources":[{"name":"bars","singularName":"bar","namespaced":true,"kind":"Bar","verbs":["delete","deletecollection","get","list","patch","create","update","watch"]},{"name":"validfoos","singularName":"validfoo","namespaced":true,"kind":"ValidFoo","verbs":["delete","deletecollection","get","list","patch","create","update","watch"]},{"name":"foos","singularName":"foo","namespaced":true,"kind":"Foo","verbs":["delete","deletecollection","get","list","patch","create","update","watch"]}]}
I0211 16:07:21.484] {"apiVersion":"company.com/v1","items":[],"kind":"FooList","metadata":{"continue":"","resourceVersion":"931","selfLink":"/apis/company.com/v1/foos"}}
I0211 16:07:21.566] {"apiVersion":"company.com/v1","items":[],"kind":"BarList","metadata":{"continue":"","resourceVersion":"931","selfLink":"/apis/company.com/v1/bars"}}
W0211 16:07:21.667] E0211 16:07:21.322058   57631 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "mygroup.example.com/v1alpha1, Resource=resources": unable to monitor quota for resource "mygroup.example.com/v1alpha1, Resource=resources", couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "company.com/v1, Resource=bars": unable to monitor quota for resource "company.com/v1, Resource=bars", couldn't start monitor for resource "company.com/v1, Resource=validfoos": unable to monitor quota for resource "company.com/v1, Resource=validfoos", couldn't start monitor for resource "company.com/v1, Resource=foos": unable to monitor quota for resource "company.com/v1, Resource=foos"]
W0211 16:07:21.668] I0211 16:07:21.481173   54270 clientconn.go:551] parsed scheme: ""
W0211 16:07:21.668] I0211 16:07:21.481218   54270 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0211 16:07:21.668] I0211 16:07:21.481266   54270 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0211 16:07:21.668] I0211 16:07:21.481307   54270 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 16:07:21.668] I0211 16:07:21.481892   54270 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 16:07:21.668] I0211 16:07:21.562028   54270 clientconn.go:551] parsed scheme: ""
... skipping 146 lines ...
I0211 16:07:24.816] foo.company.com/test patched
I0211 16:07:24.921] crd.sh:237: Successful get foos/test {{.patched}}: value1
I0211 16:07:25.015] foo.company.com/test patched
I0211 16:07:25.123] crd.sh:239: Successful get foos/test {{.patched}}: value2
I0211 16:07:25.220] foo.company.com/test patched
I0211 16:07:25.331] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I0211 16:07:25.518] +++ [0211 16:07:25] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0211 16:07:25.586] {
I0211 16:07:25.587]     "apiVersion": "company.com/v1",
I0211 16:07:25.587]     "kind": "Foo",
I0211 16:07:25.587]     "metadata": {
I0211 16:07:25.587]         "annotations": {
I0211 16:07:25.587]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 179 lines ...
I0211 16:07:34.003] namespace/non-native-resources created
I0211 16:07:34.182] bar.company.com/test created
I0211 16:07:34.294] crd.sh:456: Successful get bars {{len .items}}: 1
I0211 16:07:34.379] namespace "non-native-resources" deleted
I0211 16:07:39.684] crd.sh:459: Successful get bars {{len .items}}: 0
I0211 16:07:39.892] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0211 16:07:39.993] Error from server (NotFound): namespaces "non-native-resources" not found
I0211 16:07:40.094] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0211 16:07:40.159] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0211 16:07:40.299] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0211 16:07:40.345] +++ exit code: 0
I0211 16:07:40.427] Recording: run_cmd_with_img_tests
I0211 16:07:40.427] Running command: run_cmd_with_img_tests
... skipping 10 lines ...
W0211 16:07:40.796] I0211 16:07:40.795161   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549901260-12558", Name:"test1-848d5d4b47", UID:"285d890c-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1012", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-848d5d4b47-fm597
I0211 16:07:40.897] Successful
I0211 16:07:40.897] message:deployment.apps/test1 created
I0211 16:07:40.897] has:deployment.apps/test1 created
I0211 16:07:40.910] deployment.extensions "test1" deleted
I0211 16:07:41.016] Successful
I0211 16:07:41.017] message:error: Invalid image name "InvalidImageName": invalid reference format
I0211 16:07:41.017] has:error: Invalid image name "InvalidImageName": invalid reference format
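The invalid-image case above reduces to a one-liner: kubectl run rejects an image reference that does not parse as a valid reference format. An illustrative invocation (names are placeholders):
$ kubectl run test2 --image=InvalidImageName
# error: Invalid image name "InvalidImageName": invalid reference format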
I0211 16:07:41.039] +++ exit code: 0
I0211 16:07:41.109] +++ [0211 16:07:41] Testing recursive resources
I0211 16:07:41.118] +++ [0211 16:07:41] Creating namespace namespace-1549901261-454
I0211 16:07:41.212] namespace/namespace-1549901261-454 created
I0211 16:07:41.302] Context "test" modified.
I0211 16:07:41.427] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 16:07:41.781] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 16:07:41.785] Successful
I0211 16:07:41.785] message:pod/busybox0 created
I0211 16:07:41.785] pod/busybox1 created
I0211 16:07:41.785] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0211 16:07:41.785] has:error validating data: kind not set
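The pattern above recurs throughout the recursive-resources tests: with --recursive, kubectl processes every manifest under the directory, creates the ones it can decode, and reports a separate error for the intentionally broken file instead of aborting. A sketch of the kind of invocation that produces this output (paths come from the test data):
$ kubectl create -f hack/testdata/recursive/pod --recursive
# pod/busybox0 created
# pod/busybox1 created
# error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false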
I0211 16:07:41.907] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 16:07:42.137] generic-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0211 16:07:42.140] Successful
I0211 16:07:42.141] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 16:07:42.141] has:Object 'Kind' is missing
I0211 16:07:42.269] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 16:07:42.656] generic-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0211 16:07:42.660] Successful
I0211 16:07:42.660] message:pod/busybox0 replaced
I0211 16:07:42.660] pod/busybox1 replaced
I0211 16:07:42.661] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0211 16:07:42.661] has:error validating data: kind not set
I0211 16:07:42.786] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 16:07:42.942] Successful
I0211 16:07:42.942] message:Name:               busybox0
I0211 16:07:42.942] Namespace:          namespace-1549901261-454
I0211 16:07:42.943] Priority:           0
I0211 16:07:42.943] PriorityClassName:  <none>
... skipping 159 lines ...
I0211 16:07:42.968] has:Object 'Kind' is missing
I0211 16:07:43.104] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 16:07:43.361] generic-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0211 16:07:43.364] Successful
I0211 16:07:43.365] message:pod/busybox0 annotated
I0211 16:07:43.365] pod/busybox1 annotated
I0211 16:07:43.365] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 16:07:43.365] has:Object 'Kind' is missing
I0211 16:07:43.487] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 16:07:43.847] generic-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0211 16:07:43.850] Successful
I0211 16:07:43.851] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0211 16:07:43.851] pod/busybox0 configured
I0211 16:07:43.851] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0211 16:07:43.851] pod/busybox1 configured
I0211 16:07:43.852] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0211 16:07:43.852] has:error validating data: kind not set
I0211 16:07:43.970] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 16:07:44.165] deployment.apps/nginx created
W0211 16:07:44.266] I0211 16:07:44.171064   57631 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549901261-454", Name:"nginx", UID:"2a609bf4-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1037", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-5f7cff5b56 to 3
W0211 16:07:44.267] I0211 16:07:44.175998   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549901261-454", Name:"nginx-5f7cff5b56", UID:"2a61847c-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1038", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5f7cff5b56-jb7g2
W0211 16:07:44.267] I0211 16:07:44.179931   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549901261-454", Name:"nginx-5f7cff5b56", UID:"2a61847c-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1038", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5f7cff5b56-kmqxm
W0211 16:07:44.268] I0211 16:07:44.180137   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549901261-454", Name:"nginx-5f7cff5b56", UID:"2a61847c-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1038", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-5f7cff5b56-d7r48
... skipping 49 lines ...
W0211 16:07:44.875] I0211 16:07:44.566398   57631 namespace_controller.go:171] Namespace has been deleted non-native-resources
I0211 16:07:44.975] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 16:07:45.125] generic-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 16:07:45.128] Successful
I0211 16:07:45.128] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0211 16:07:45.128] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0211 16:07:45.129] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 16:07:45.129] has:Object 'Kind' is missing
I0211 16:07:45.253] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 16:07:45.370] Successful
I0211 16:07:45.370] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 16:07:45.371] has:busybox0:busybox1:
I0211 16:07:45.372] Successful
I0211 16:07:45.373] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 16:07:45.373] has:Object 'Kind' is missing
I0211 16:07:45.486] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 16:07:45.594] pod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 16:07:45.698] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0211 16:07:45.701] Successful
I0211 16:07:45.701] message:pod/busybox0 labeled
I0211 16:07:45.701] pod/busybox1 labeled
I0211 16:07:45.702] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 16:07:45.702] has:Object 'Kind' is missing
I0211 16:07:45.806] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 16:07:45.903] pod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 16:07:46.004] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0211 16:07:46.006] Successful
I0211 16:07:46.007] message:pod/busybox0 patched
I0211 16:07:46.007] pod/busybox1 patched
I0211 16:07:46.007] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 16:07:46.007] has:Object 'Kind' is missing
I0211 16:07:46.110] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 16:07:46.311] generic-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 16:07:46.314] Successful
I0211 16:07:46.315] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0211 16:07:46.315] pod "busybox0" force deleted
I0211 16:07:46.315] pod "busybox1" force deleted
I0211 16:07:46.316] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0211 16:07:46.316] has:Object 'Kind' is missing
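The "Immediate deletion does not wait..." warning in the force-delete blocks is what kubectl prints when deletion is requested with a zero grace period. A sketch of the form used here (path from the test data, flags illustrative):
$ kubectl delete -f hack/testdata/recursive/pod --recursive --grace-period=0 --force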
I0211 16:07:46.416] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 16:07:46.584] replicationcontroller/busybox0 created
I0211 16:07:46.588] replicationcontroller/busybox1 created
W0211 16:07:46.688] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0211 16:07:46.689] I0211 16:07:46.587986   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549901261-454", Name:"busybox0", UID:"2bd1de05-2e17-11e9-9664-0242ac110002", APIVersion:"v1", ResourceVersion:"1068", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-44q9l
W0211 16:07:46.690] I0211 16:07:46.591121   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549901261-454", Name:"busybox1", UID:"2bd29200-2e17-11e9-9664-0242ac110002", APIVersion:"v1", ResourceVersion:"1070", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-g68bj
I0211 16:07:46.790] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 16:07:46.806] generic-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 16:07:46.908] generic-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I0211 16:07:47.014] generic-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I0211 16:07:47.220] generic-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0211 16:07:47.321] generic-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0211 16:07:47.324] Successful
I0211 16:07:47.324] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0211 16:07:47.324] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0211 16:07:47.325] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 16:07:47.325] has:Object 'Kind' is missing
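The HPA assertions above (min 1, max 2, target 80% CPU) together with the decode error for the broken manifest are consistent with a recursive autoscale over the same test directory. A minimal sketch:
$ kubectl autoscale -f hack/testdata/recursive/rc --recursive --min=1 --max=2 --cpu-percent=80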
I0211 16:07:47.415] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0211 16:07:47.506] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0211 16:07:47.612] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 16:07:47.713] generic-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I0211 16:07:47.812] generic-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I0211 16:07:48.024] generic-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0211 16:07:48.128] generic-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0211 16:07:48.130] Successful
I0211 16:07:48.131] message:service/busybox0 exposed
I0211 16:07:48.131] service/busybox1 exposed
I0211 16:07:48.131] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 16:07:48.131] has:Object 'Kind' is missing
I0211 16:07:48.237] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 16:07:48.342] generic-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I0211 16:07:48.448] generic-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I0211 16:07:48.674] generic-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I0211 16:07:48.783] generic-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I0211 16:07:48.786] Successful
I0211 16:07:48.786] message:replicationcontroller/busybox0 scaled
I0211 16:07:48.787] replicationcontroller/busybox1 scaled
I0211 16:07:48.787] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 16:07:48.787] has:Object 'Kind' is missing
I0211 16:07:48.893] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 16:07:49.125] generic-resources.sh:381: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 16:07:49.129] Successful
I0211 16:07:49.129] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0211 16:07:49.129] replicationcontroller "busybox0" force deleted
I0211 16:07:49.129] replicationcontroller "busybox1" force deleted
I0211 16:07:49.130] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 16:07:49.130] has:Object 'Kind' is missing
I0211 16:07:49.233] generic-resources.sh:386: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 16:07:49.399] deployment.apps/nginx1-deployment created
I0211 16:07:49.404] deployment.apps/nginx0-deployment created
W0211 16:07:49.505] I0211 16:07:48.556673   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549901261-454", Name:"busybox0", UID:"2bd1de05-2e17-11e9-9664-0242ac110002", APIVersion:"v1", ResourceVersion:"1089", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-9j4fs
W0211 16:07:49.505] I0211 16:07:48.567470   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549901261-454", Name:"busybox1", UID:"2bd29200-2e17-11e9-9664-0242ac110002", APIVersion:"v1", ResourceVersion:"1093", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-wv2dn
W0211 16:07:49.505] I0211 16:07:49.403395   57631 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549901261-454", Name:"nginx1-deployment", UID:"2d7f518b-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1109", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-7c76c6cbb8 to 2
W0211 16:07:49.506] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0211 16:07:49.506] I0211 16:07:49.406994   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549901261-454", Name:"nginx1-deployment-7c76c6cbb8", UID:"2d801206-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1110", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7c76c6cbb8-jm8gx
W0211 16:07:49.506] I0211 16:07:49.409684   57631 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549901261-454", Name:"nginx0-deployment", UID:"2d801d71-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1111", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-7bb85585d7 to 2
W0211 16:07:49.507] I0211 16:07:49.410455   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549901261-454", Name:"nginx1-deployment-7c76c6cbb8", UID:"2d801206-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1110", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7c76c6cbb8-dm6q2
W0211 16:07:49.507] I0211 16:07:49.411459   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549901261-454", Name:"nginx0-deployment-7bb85585d7", UID:"2d80cff6-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1115", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-7bb85585d7-kqjht
W0211 16:07:49.507] I0211 16:07:49.414320   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549901261-454", Name:"nginx0-deployment-7bb85585d7", UID:"2d80cff6-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1115", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-7bb85585d7-l96l6
I0211 16:07:49.608] generic-resources.sh:390: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0211 16:07:49.632] generic-resources.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0211 16:07:49.860] generic-resources.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0211 16:07:49.863] Successful
I0211 16:07:49.864] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0211 16:07:49.864] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0211 16:07:49.864] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0211 16:07:49.865] has:Object 'Kind' is missing
I0211 16:07:49.965] deployment.apps/nginx1-deployment paused
I0211 16:07:49.969] deployment.apps/nginx0-deployment paused
I0211 16:07:50.093] generic-resources.sh:402: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0211 16:07:50.096] Successful
I0211 16:07:50.097] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
I0211 16:07:50.449] 1         <none>
I0211 16:07:50.449] 
I0211 16:07:50.449] deployment.apps/nginx0-deployment 
I0211 16:07:50.449] REVISION  CHANGE-CAUSE
I0211 16:07:50.449] 1         <none>
I0211 16:07:50.449] 
I0211 16:07:50.450] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0211 16:07:50.450] has:nginx0-deployment
I0211 16:07:50.451] Successful
I0211 16:07:50.452] message:deployment.apps/nginx1-deployment 
I0211 16:07:50.452] REVISION  CHANGE-CAUSE
I0211 16:07:50.452] 1         <none>
I0211 16:07:50.452] 
I0211 16:07:50.452] deployment.apps/nginx0-deployment 
I0211 16:07:50.452] REVISION  CHANGE-CAUSE
I0211 16:07:50.452] 1         <none>
I0211 16:07:50.452] 
I0211 16:07:50.453] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0211 16:07:50.453] has:nginx1-deployment
I0211 16:07:50.454] Successful
I0211 16:07:50.454] message:deployment.apps/nginx1-deployment 
I0211 16:07:50.454] REVISION  CHANGE-CAUSE
I0211 16:07:50.455] 1         <none>
I0211 16:07:50.455] 
I0211 16:07:50.455] deployment.apps/nginx0-deployment 
I0211 16:07:50.455] REVISION  CHANGE-CAUSE
I0211 16:07:50.455] 1         <none>
I0211 16:07:50.455] 
I0211 16:07:50.455] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0211 16:07:50.455] has:Object 'Kind' is missing
I0211 16:07:50.544] deployment.apps "nginx1-deployment" force deleted
I0211 16:07:50.550] deployment.apps "nginx0-deployment" force deleted
W0211 16:07:50.651] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0211 16:07:50.651] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
W0211 16:07:51.375] E0211 16:07:51.374919   57631 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
I0211 16:07:51.661] generic-resources.sh:424: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 16:07:51.825] replicationcontroller/busybox0 created
I0211 16:07:51.830] replicationcontroller/busybox1 created
W0211 16:07:51.931] I0211 16:07:51.828534   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549901261-454", Name:"busybox0", UID:"2ef167c7-2e17-11e9-9664-0242ac110002", APIVersion:"v1", ResourceVersion:"1159", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-ht662
W0211 16:07:51.931] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0211 16:07:51.932] I0211 16:07:51.834350   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549901261-454", Name:"busybox1", UID:"2ef2636f-2e17-11e9-9664-0242ac110002", APIVersion:"v1", ResourceVersion:"1161", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-pttvg
W0211 16:07:51.932] I0211 16:07:51.916452   57631 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0211 16:07:52.018] I0211 16:07:52.016848   57631 controller_utils.go:1028] Caches are synced for garbage collector controller
I0211 16:07:52.119] generic-resources.sh:428: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0211 16:07:52.119] Successful
I0211 16:07:52.119] message:no rollbacker has been implemented for "ReplicationController"
... skipping 4 lines ...
I0211 16:07:52.120] message:no rollbacker has been implemented for "ReplicationController"
I0211 16:07:52.120] no rollbacker has been implemented for "ReplicationController"
I0211 16:07:52.121] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 16:07:52.121] has:Object 'Kind' is missing
I0211 16:07:52.170] Successful
I0211 16:07:52.171] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 16:07:52.171] error: replicationcontrollers "busybox0" pausing is not supported
I0211 16:07:52.171] error: replicationcontrollers "busybox1" pausing is not supported
I0211 16:07:52.171] has:Object 'Kind' is missing
I0211 16:07:52.173] Successful
I0211 16:07:52.174] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 16:07:52.174] error: replicationcontrollers "busybox0" pausing is not supported
I0211 16:07:52.174] error: replicationcontrollers "busybox1" pausing is not supported
I0211 16:07:52.175] has:replicationcontrollers "busybox0" pausing is not supported
I0211 16:07:52.176] Successful
I0211 16:07:52.177] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 16:07:52.177] error: replicationcontrollers "busybox0" pausing is not supported
I0211 16:07:52.177] error: replicationcontrollers "busybox1" pausing is not supported
I0211 16:07:52.177] has:replicationcontrollers "busybox1" pausing is not supported
I0211 16:07:52.283] Successful
I0211 16:07:52.283] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 16:07:52.284] error: replicationcontrollers "busybox0" resuming is not supported
I0211 16:07:52.284] error: replicationcontrollers "busybox1" resuming is not supported
I0211 16:07:52.284] has:Object 'Kind' is missing
I0211 16:07:52.286] Successful
I0211 16:07:52.286] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 16:07:52.287] error: replicationcontrollers "busybox0" resuming is not supported
I0211 16:07:52.287] error: replicationcontrollers "busybox1" resuming is not supported
I0211 16:07:52.287] has:replicationcontrollers "busybox0" resuming is not supported
I0211 16:07:52.289] Successful
I0211 16:07:52.289] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 16:07:52.289] error: replicationcontrollers "busybox0" resuming is not supported
I0211 16:07:52.289] error: replicationcontrollers "busybox1" resuming is not supported
I0211 16:07:52.290] has:replicationcontrollers "busybox0" resuming is not supported
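Pause and resume are rollout subcommands that only Deployments implement, so for ReplicationControllers kubectl decodes what it can and then reports, per object, that pausing or resuming is not supported. A sketch of the two invocations exercised above (paths from the test data):
$ kubectl rollout pause -f hack/testdata/recursive/rc --recursive
$ kubectl rollout resume -f hack/testdata/recursive/rc --recursive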
I0211 16:07:52.384] replicationcontroller "busybox0" force deleted
I0211 16:07:52.390] replicationcontroller "busybox1" force deleted
W0211 16:07:52.491] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0211 16:07:52.491] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0211 16:07:53.398] Recording: run_namespace_tests
I0211 16:07:53.399] Running command: run_namespace_tests
I0211 16:07:53.424] 
I0211 16:07:53.426] +++ Running case: test-cmd.run_namespace_tests 
I0211 16:07:53.429] +++ working dir: /go/src/k8s.io/kubernetes
I0211 16:07:53.433] +++ command: run_namespace_tests
I0211 16:07:53.444] +++ [0211 16:07:53] Testing kubectl(v1:namespaces)
I0211 16:07:53.525] namespace/my-namespace created
I0211 16:07:53.633] core.sh:1295: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0211 16:07:53.726] namespace "my-namespace" deleted
I0211 16:07:58.915] namespace/my-namespace condition met
I0211 16:07:59.016] Successful
I0211 16:07:59.016] message:Error from server (NotFound): namespaces "my-namespace" not found
I0211 16:07:59.016] has: not found
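The "condition met" line that appears a few seconds after the delete has the shape of kubectl wait output, and the NotFound that follows confirms the namespace is gone. A sketch of that sequence, assuming the test waits on deletion (timeout value illustrative):
$ kubectl wait --for=delete namespace/my-namespace --timeout=60s
$ kubectl get namespaces my-namespace
# Error from server (NotFound): namespaces "my-namespace" not found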
I0211 16:07:59.137] core.sh:1310: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I0211 16:07:59.215] namespace/other created
I0211 16:07:59.322] core.sh:1314: Successful get namespaces/other {{.metadata.name}}: other
I0211 16:07:59.427] core.sh:1318: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 16:07:59.601] pod/valid-pod created
I0211 16:07:59.718] core.sh:1322: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 16:07:59.825] core.sh:1324: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 16:07:59.919] Successful
I0211 16:07:59.919] message:error: a resource cannot be retrieved by name across all namespaces
I0211 16:07:59.919] has:a resource cannot be retrieved by name across all namespaces
I0211 16:08:00.027] core.sh:1331: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0211 16:08:00.120] pod "valid-pod" force deleted
W0211 16:08:00.221] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0211 16:08:00.322] core.sh:1335: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 16:08:00.322] namespace "other" deleted
... skipping 115 lines ...
I0211 16:08:22.151] +++ command: run_client_config_tests
I0211 16:08:22.168] +++ [0211 16:08:22] Creating namespace namespace-1549901302-10204
I0211 16:08:22.253] namespace/namespace-1549901302-10204 created
I0211 16:08:22.334] Context "test" modified.
I0211 16:08:22.344] +++ [0211 16:08:22] Testing client config
I0211 16:08:22.425] Successful
I0211 16:08:22.425] message:error: stat missing: no such file or directory
I0211 16:08:22.425] has:missing: no such file or directory
I0211 16:08:22.515] Successful
I0211 16:08:22.516] message:error: stat missing: no such file or directory
I0211 16:08:22.516] has:missing: no such file or directory
I0211 16:08:22.596] Successful
I0211 16:08:22.596] message:error: stat missing: no such file or directory
I0211 16:08:22.596] has:missing: no such file or directory
I0211 16:08:22.678] Successful
I0211 16:08:22.678] message:Error in configuration: context was not found for specified context: missing-context
I0211 16:08:22.678] has:context was not found for specified context: missing-context
I0211 16:08:22.767] Successful
I0211 16:08:22.767] message:error: no server found for cluster "missing-cluster"
I0211 16:08:22.768] has:no server found for cluster "missing-cluster"
I0211 16:08:22.849] Successful
I0211 16:08:22.849] message:error: auth info "missing-user" does not exist
I0211 16:08:22.849] has:auth info "missing-user" does not exist
I0211 16:08:23.025] Successful
I0211 16:08:23.026] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0211 16:08:23.026] has:Error loading config file
I0211 16:08:23.117] Successful
I0211 16:08:23.117] message:error: stat missing-config: no such file or directory
I0211 16:08:23.117] has:no such file or directory
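Each client-config failure above maps to one override flag pointing at something that does not exist. A sketch of invocations that produce errors of this shape (flag values are illustrative):
$ kubectl get pods --kubeconfig=missing
$ kubectl get pods --context=missing-context
$ kubectl get pods --cluster=missing-cluster
$ kubectl get pods --user=missing-user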
I0211 16:08:23.136] +++ exit code: 0
I0211 16:08:23.186] Recording: run_service_accounts_tests
I0211 16:08:23.186] Running command: run_service_accounts_tests
I0211 16:08:23.218] 
I0211 16:08:23.220] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 35 lines ...
I0211 16:08:30.407] Labels:                        run=pi
I0211 16:08:30.407] Annotations:                   <none>
I0211 16:08:30.407] Schedule:                      59 23 31 2 *
I0211 16:08:30.408] Concurrency Policy:            Allow
I0211 16:08:30.408] Suspend:                       False
I0211 16:08:30.408] Successful Job History Limit:  824641218360
I0211 16:08:30.408] Failed Job History Limit:      1
I0211 16:08:30.408] Starting Deadline Seconds:     <unset>
I0211 16:08:30.408] Selector:                      <unset>
I0211 16:08:30.408] Parallelism:                   <unset>
I0211 16:08:30.408] Completions:                   <unset>
I0211 16:08:30.409] Pod Template:
I0211 16:08:30.409]   Labels:  run=pi
... skipping 32 lines ...
I0211 16:08:31.019]                 job-name=test-job
I0211 16:08:31.019]                 run=pi
I0211 16:08:31.019] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0211 16:08:31.019] Parallelism:    1
I0211 16:08:31.019] Completions:    1
I0211 16:08:31.019] Start Time:     Mon, 11 Feb 2019 16:08:30 +0000
I0211 16:08:31.020] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0211 16:08:31.020] Pod Template:
I0211 16:08:31.020]   Labels:  controller-uid=46106633-2e17-11e9-9664-0242ac110002
I0211 16:08:31.020]            job-name=test-job
I0211 16:08:31.020]            run=pi
I0211 16:08:31.020]   Containers:
I0211 16:08:31.020]    pi:
... skipping 327 lines ...
I0211 16:08:41.253]   selector:
I0211 16:08:41.253]     role: padawan
I0211 16:08:41.253]   sessionAffinity: None
I0211 16:08:41.253]   type: ClusterIP
I0211 16:08:41.253] status:
I0211 16:08:41.253]   loadBalancer: {}
W0211 16:08:41.354] error: you must specify resources by --filename when --local is set.
W0211 16:08:41.354] Example resource specifications include:
W0211 16:08:41.354]    '-f rsrc.yaml'
W0211 16:08:41.354]    '--filename=rsrc.json'
W0211 16:08:41.371] I0211 16:08:41.370911   57631 namespace_controller.go:171] Namespace has been deleted test-jobs
I0211 16:08:41.472] core.sh:886: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0211 16:08:41.637] core.sh:893: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
... skipping 91 lines ...
I0211 16:08:48.340]   Volumes:	<none>
I0211 16:08:48.340]  (dry run)
I0211 16:08:48.449] apps.sh:79: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0211 16:08:48.555] apps.sh:80: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0211 16:08:48.665] apps.sh:81: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0211 16:08:48.782] daemonset.extensions/bind rolled back
W0211 16:08:48.886] E0211 16:08:48.799211   57631 daemon_controller.go:302] namespace-1549901326-5848/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1549901326-5848", SelfLink:"/apis/apps/v1/namespaces/namespace-1549901326-5848/daemonsets/bind", UID:"4fe9204a-2e17-11e9-9664-0242ac110002", ResourceVersion:"1378", Generation:3, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63685498127, loc:(*time.Location)(0x69f3f40)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1549901326-5848\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true", "deprecated.daemonset.template.generation":"3"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc0007f51e0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:2.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, 
StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002f9d1e8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00328bda0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc0007f5280), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000c919a8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002f9d260)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:2, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
I0211 16:08:48.986] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0211 16:08:49.005] apps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0211 16:08:49.126] Successful
I0211 16:08:49.126] message:error: unable to find specified revision 1000000 in history
I0211 16:08:49.127] has:unable to find specified revision
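The "unable to find specified revision" failure is what kubectl rollout undo returns when asked for a revision that is not in the DaemonSet's history. A minimal sketch (resource name from the test):
$ kubectl rollout undo daemonset/bind --to-revision=1000000
# error: unable to find specified revision 1000000 in history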
I0211 16:08:49.232] apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0211 16:08:49.343] apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0211 16:08:49.463] daemonset.extensions/bind rolled back
I0211 16:08:49.579] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0211 16:08:49.689] apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 28 lines ...
I0211 16:08:51.234] Namespace:    namespace-1549901330-1061
I0211 16:08:51.235] Selector:     app=guestbook,tier=frontend
I0211 16:08:51.235] Labels:       app=guestbook
I0211 16:08:51.235]               tier=frontend
I0211 16:08:51.235] Annotations:  <none>
I0211 16:08:51.235] Replicas:     3 current / 3 desired
I0211 16:08:51.235] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 16:08:51.235] Pod Template:
I0211 16:08:51.235]   Labels:  app=guestbook
I0211 16:08:51.236]            tier=frontend
I0211 16:08:51.236]   Containers:
I0211 16:08:51.236]    php-redis:
I0211 16:08:51.236]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0211 16:08:51.366] Namespace:    namespace-1549901330-1061
I0211 16:08:51.366] Selector:     app=guestbook,tier=frontend
I0211 16:08:51.366] Labels:       app=guestbook
I0211 16:08:51.366]               tier=frontend
I0211 16:08:51.366] Annotations:  <none>
I0211 16:08:51.366] Replicas:     3 current / 3 desired
I0211 16:08:51.366] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 16:08:51.366] Pod Template:
I0211 16:08:51.367]   Labels:  app=guestbook
I0211 16:08:51.367]            tier=frontend
I0211 16:08:51.367]   Containers:
I0211 16:08:51.367]    php-redis:
I0211 16:08:51.367]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0211 16:08:51.490] Namespace:    namespace-1549901330-1061
I0211 16:08:51.490] Selector:     app=guestbook,tier=frontend
I0211 16:08:51.490] Labels:       app=guestbook
I0211 16:08:51.490]               tier=frontend
I0211 16:08:51.490] Annotations:  <none>
I0211 16:08:51.490] Replicas:     3 current / 3 desired
I0211 16:08:51.491] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 16:08:51.491] Pod Template:
I0211 16:08:51.491]   Labels:  app=guestbook
I0211 16:08:51.491]            tier=frontend
I0211 16:08:51.492]   Containers:
I0211 16:08:51.492]    php-redis:
I0211 16:08:51.492]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I0211 16:08:51.619] Namespace:    namespace-1549901330-1061
I0211 16:08:51.619] Selector:     app=guestbook,tier=frontend
I0211 16:08:51.619] Labels:       app=guestbook
I0211 16:08:51.619]               tier=frontend
I0211 16:08:51.619] Annotations:  <none>
I0211 16:08:51.619] Replicas:     3 current / 3 desired
I0211 16:08:51.620] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 16:08:51.620] Pod Template:
I0211 16:08:51.620]   Labels:  app=guestbook
I0211 16:08:51.620]            tier=frontend
I0211 16:08:51.620]   Containers:
I0211 16:08:51.620]    php-redis:
I0211 16:08:51.620]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0211 16:08:51.792] Namespace:    namespace-1549901330-1061
I0211 16:08:51.793] Selector:     app=guestbook,tier=frontend
I0211 16:08:51.793] Labels:       app=guestbook
I0211 16:08:51.793]               tier=frontend
I0211 16:08:51.793] Annotations:  <none>
I0211 16:08:51.793] Replicas:     3 current / 3 desired
I0211 16:08:51.793] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 16:08:51.793] Pod Template:
I0211 16:08:51.793]   Labels:  app=guestbook
I0211 16:08:51.794]            tier=frontend
I0211 16:08:51.794]   Containers:
I0211 16:08:51.794]    php-redis:
I0211 16:08:51.794]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0211 16:08:51.917] Namespace:    namespace-1549901330-1061
I0211 16:08:51.918] Selector:     app=guestbook,tier=frontend
I0211 16:08:51.918] Labels:       app=guestbook
I0211 16:08:51.918]               tier=frontend
I0211 16:08:51.918] Annotations:  <none>
I0211 16:08:51.918] Replicas:     3 current / 3 desired
I0211 16:08:51.918] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 16:08:51.918] Pod Template:
I0211 16:08:51.918]   Labels:  app=guestbook
I0211 16:08:51.918]            tier=frontend
I0211 16:08:51.919]   Containers:
I0211 16:08:51.919]    php-redis:
I0211 16:08:51.919]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0211 16:08:52.040] Namespace:    namespace-1549901330-1061
I0211 16:08:52.040] Selector:     app=guestbook,tier=frontend
I0211 16:08:52.040] Labels:       app=guestbook
I0211 16:08:52.040]               tier=frontend
I0211 16:08:52.040] Annotations:  <none>
I0211 16:08:52.040] Replicas:     3 current / 3 desired
I0211 16:08:52.040] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 16:08:52.040] Pod Template:
I0211 16:08:52.041]   Labels:  app=guestbook
I0211 16:08:52.041]            tier=frontend
I0211 16:08:52.041]   Containers:
I0211 16:08:52.041]    php-redis:
I0211 16:08:52.041]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0211 16:08:52.171] Namespace:    namespace-1549901330-1061
I0211 16:08:52.171] Selector:     app=guestbook,tier=frontend
I0211 16:08:52.171] Labels:       app=guestbook
I0211 16:08:52.172]               tier=frontend
I0211 16:08:52.172] Annotations:  <none>
I0211 16:08:52.172] Replicas:     3 current / 3 desired
I0211 16:08:52.172] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 16:08:52.172] Pod Template:
I0211 16:08:52.172]   Labels:  app=guestbook
I0211 16:08:52.172]            tier=frontend
I0211 16:08:52.173]   Containers:
I0211 16:08:52.173]    php-redis:
I0211 16:08:52.173]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
W0211 16:08:52.492] I0211 16:08:52.397617   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549901330-1061", Name:"frontend", UID:"522e9c52-2e17-11e9-9664-0242ac110002", APIVersion:"v1", ResourceVersion:"1416", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-m872c
I0211 16:08:52.593] core.sh:1045: Successful get rc frontend {{.spec.replicas}}: 2
I0211 16:08:52.614] core.sh:1049: Successful get rc frontend {{.spec.replicas}}: 2
I0211 16:08:52.820] core.sh:1053: Successful get rc frontend {{.spec.replicas}}: 2
I0211 16:08:52.923] core.sh:1057: Successful get rc frontend {{.spec.replicas}}: 2
I0211 16:08:53.033] replicationcontroller/frontend scaled
W0211 16:08:53.134] error: Expected replicas to be 3, was 2
W0211 16:08:53.135] I0211 16:08:53.038274   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549901330-1061", Name:"frontend", UID:"522e9c52-2e17-11e9-9664-0242ac110002", APIVersion:"v1", ResourceVersion:"1422", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-ll7vw
I0211 16:08:53.235] core.sh:1061: Successful get rc frontend {{.spec.replicas}}: 3
I0211 16:08:53.274] core.sh:1065: Successful get rc frontend {{.spec.replicas}}: 3
I0211 16:08:53.373] replicationcontroller/frontend scaled
W0211 16:08:53.474] I0211 16:08:53.379248   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1549901330-1061", Name:"frontend", UID:"522e9c52-2e17-11e9-9664-0242ac110002", APIVersion:"v1", ResourceVersion:"1427", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-ll7vw
I0211 16:08:53.575] core.sh:1069: Successful get rc frontend {{.spec.replicas}}: 2
... skipping 41 lines ...
I0211 16:08:55.755] service "expose-test-deployment" deleted
I0211 16:08:55.865] Successful
I0211 16:08:55.866] message:service/expose-test-deployment exposed
I0211 16:08:55.866] has:service/expose-test-deployment exposed
I0211 16:08:55.958] service "expose-test-deployment" deleted
I0211 16:08:56.062] Successful
I0211 16:08:56.062] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0211 16:08:56.062] See 'kubectl expose -h' for help and examples
I0211 16:08:56.063] has:invalid deployment: no selectors
I0211 16:08:56.157] Successful
I0211 16:08:56.157] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0211 16:08:56.158] See 'kubectl expose -h' for help and examples
I0211 16:08:56.158] has:invalid deployment: no selectors
I0211 16:08:56.323] deployment.apps/nginx-deployment created
W0211 16:08:56.424] I0211 16:08:56.326326   57631 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549901330-1061", Name:"nginx-deployment", UID:"55630333-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1545", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-64bb598779 to 3
W0211 16:08:56.424] I0211 16:08:56.329935   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549901330-1061", Name:"nginx-deployment-64bb598779", UID:"5563a3f4-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1546", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-64bb598779-7wlzt
W0211 16:08:56.424] I0211 16:08:56.333061   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549901330-1061", Name:"nginx-deployment-64bb598779", UID:"5563a3f4-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1546", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-64bb598779-g7kzk
... skipping 23 lines ...
I0211 16:08:58.464] service "frontend" deleted
I0211 16:08:58.472] service "frontend-2" deleted
I0211 16:08:58.480] service "frontend-3" deleted
I0211 16:08:58.489] service "frontend-4" deleted
I0211 16:08:58.498] service "frontend-5" deleted
I0211 16:08:58.610] Successful
I0211 16:08:58.610] message:error: cannot expose a Node
I0211 16:08:58.610] has:cannot expose
I0211 16:08:58.709] Successful
I0211 16:08:58.710] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0211 16:08:58.710] has:metadata.name: Invalid value
I0211 16:08:58.815] Successful
I0211 16:08:58.816] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 30 lines ...
I0211 16:09:00.975] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0211 16:09:01.080] core.sh:1233: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I0211 16:09:01.168] horizontalpodautoscaler.autoscaling "frontend" deleted
I0211 16:09:01.270] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0211 16:09:01.378] core.sh:1237: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0211 16:09:01.465] horizontalpodautoscaler.autoscaling "frontend" deleted
W0211 16:09:01.566] Error: required flag(s) "max" not set
W0211 16:09:01.566] 
W0211 16:09:01.566] 
W0211 16:09:01.566] Examples:
W0211 16:09:01.567]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0211 16:09:01.567]   kubectl autoscale deployment foo --min=2 --max=10
W0211 16:09:01.567]   
... skipping 54 lines ...
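(Aside, not part of the log: the `Error: required flag(s) "max" not set` message above is what `kubectl autoscale` emits when `--max` is omitted; the test deliberately triggers it. kubectl's CLI is built on spf13/cobra, and this is cobra's standard required-flag validation error. The sketch below is a minimal, hypothetical reproduction of that class of error, not kubectl's actual autoscale command.)

```go
package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

func main() {
	// Hypothetical "autoscale" command with a required --max flag,
	// mirroring the behaviour seen in the log when --max is omitted.
	cmd := &cobra.Command{
		Use: "autoscale",
		RunE: func(cmd *cobra.Command, args []string) error {
			max, _ := cmd.Flags().GetInt("max")
			fmt.Println("would scale up to", max, "replicas")
			return nil
		},
	}
	cmd.Flags().Int("min", 1, "lower limit for the number of pods")
	cmd.Flags().Int("max", -1, "upper limit for the number of pods")
	if err := cmd.MarkFlagRequired("max"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	// Invoked without --max, Execute fails with:
	//   Error: required flag(s) "max" not set
	// followed by the usage/examples text, as captured above.
	if err := cmd.Execute(); err != nil {
		os.Exit(1)
	}
}
```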
I0211 16:09:01.843]           limits:
I0211 16:09:01.843]             cpu: 300m
I0211 16:09:01.843]           requests:
I0211 16:09:01.844]             cpu: 300m
I0211 16:09:01.844]       terminationGracePeriodSeconds: 0
I0211 16:09:01.844] status: {}
W0211 16:09:01.944] Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
I0211 16:09:02.105] deployment.apps/nginx-deployment-resources created
W0211 16:09:02.207] I0211 16:09:02.110563   57631 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549901330-1061", Name:"nginx-deployment-resources", UID:"58d5784d-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1685", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-695c766d58 to 3
W0211 16:09:02.207] I0211 16:09:02.114887   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549901330-1061", Name:"nginx-deployment-resources-695c766d58", UID:"58d63940-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1686", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-695c766d58-8b456
W0211 16:09:02.207] I0211 16:09:02.118472   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549901330-1061", Name:"nginx-deployment-resources-695c766d58", UID:"58d63940-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1686", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-695c766d58-dg9w2
W0211 16:09:02.208] I0211 16:09:02.120761   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549901330-1061", Name:"nginx-deployment-resources-695c766d58", UID:"58d63940-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1686", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-695c766d58-9dvzl
I0211 16:09:02.308] core.sh:1252: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
... skipping 2 lines ...
I0211 16:09:02.548] deployment.extensions/nginx-deployment-resources resource requirements updated
W0211 16:09:02.649] I0211 16:09:02.552876   57631 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549901330-1061", Name:"nginx-deployment-resources", UID:"58d5784d-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1699", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-5b7fc6dd8b to 1
W0211 16:09:02.649] I0211 16:09:02.557393   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549901330-1061", Name:"nginx-deployment-resources-5b7fc6dd8b", UID:"5919beb5-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1700", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5b7fc6dd8b-nx69k
I0211 16:09:02.752] core.sh:1257: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
I0211 16:09:02.781] core.sh:1258: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
I0211 16:09:02.979] deployment.extensions/nginx-deployment-resources resource requirements updated
W0211 16:09:03.080] error: unable to find container named redis
W0211 16:09:03.080] I0211 16:09:02.990945   57631 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549901330-1061", Name:"nginx-deployment-resources", UID:"58d5784d-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1709", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-5b7fc6dd8b to 0
W0211 16:09:03.081] I0211 16:09:02.997264   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549901330-1061", Name:"nginx-deployment-resources-5b7fc6dd8b", UID:"5919beb5-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1713", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-5b7fc6dd8b-nx69k
W0211 16:09:03.081] I0211 16:09:03.000089   57631 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549901330-1061", Name:"nginx-deployment-resources", UID:"58d5784d-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1711", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6bc4567bf6 to 1
W0211 16:09:03.081] I0211 16:09:03.007760   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549901330-1061", Name:"nginx-deployment-resources-6bc4567bf6", UID:"595b54a7-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1717", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6bc4567bf6-xkxbq
I0211 16:09:03.182] core.sh:1263: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0211 16:09:03.214] core.sh:1264: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
... skipping 79 lines ...
I0211 16:09:03.748]     status: "True"
I0211 16:09:03.749]     type: Progressing
I0211 16:09:03.749]   observedGeneration: 4
I0211 16:09:03.749]   replicas: 4
I0211 16:09:03.749]   unavailableReplicas: 4
I0211 16:09:03.749]   updatedReplicas: 1
W0211 16:09:03.849] error: you must specify resources by --filename when --local is set.
W0211 16:09:03.850] Example resource specifications include:
W0211 16:09:03.850]    '-f rsrc.yaml'
W0211 16:09:03.850]    '--filename=rsrc.json'
I0211 16:09:03.950] core.sh:1273: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0211 16:09:04.019] core.sh:1274: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I0211 16:09:04.127] core.sh:1275: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 44 lines ...
I0211 16:09:05.817]                 pod-template-hash=7875bf5c8b
I0211 16:09:05.817] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I0211 16:09:05.818]                 deployment.kubernetes.io/max-replicas: 2
I0211 16:09:05.818]                 deployment.kubernetes.io/revision: 1
I0211 16:09:05.818] Controlled By:  Deployment/test-nginx-apps
I0211 16:09:05.818] Replicas:       1 current / 1 desired
I0211 16:09:05.818] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0211 16:09:05.818] Pod Template:
I0211 16:09:05.819]   Labels:  app=test-nginx-apps
I0211 16:09:05.819]            pod-template-hash=7875bf5c8b
I0211 16:09:05.819]   Containers:
I0211 16:09:05.819]    nginx:
I0211 16:09:05.819]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 91 lines ...
I0211 16:09:10.245]     Image:	k8s.gcr.io/nginx:test-cmd
I0211 16:09:10.351] apps.sh:296: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0211 16:09:10.469] deployment.extensions/nginx rolled back
I0211 16:09:11.584] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0211 16:09:11.807] apps.sh:303: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0211 16:09:11.921] deployment.extensions/nginx rolled back
W0211 16:09:12.022] error: unable to find specified revision 1000000 in history
I0211 16:09:13.037] apps.sh:307: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0211 16:09:13.150] deployment.extensions/nginx paused
W0211 16:09:13.283] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
I0211 16:09:13.403] deployment.extensions/nginx resumed
I0211 16:09:13.540] deployment.extensions/nginx rolled back
I0211 16:09:13.766]     deployment.kubernetes.io/revision-history: 1,3
W0211 16:09:13.964] error: desired revision (3) is different from the running revision (5)
I0211 16:09:14.130] deployment.apps/nginx2 created
I0211 16:09:14.228] deployment.extensions "nginx2" deleted
I0211 16:09:14.320] deployment.extensions "nginx" deleted
W0211 16:09:14.421] I0211 16:09:14.133821   57631 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549901344-24100", Name:"nginx2", UID:"60003f7c-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1935", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx2-78cb9c866 to 3
W0211 16:09:14.421] I0211 16:09:14.137299   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549901344-24100", Name:"nginx2-78cb9c866", UID:"6000f2be-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1936", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-78cb9c866-2hxwh
W0211 16:09:14.422] I0211 16:09:14.141164   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549901344-24100", Name:"nginx2-78cb9c866", UID:"6000f2be-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1936", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx2-78cb9c866-5pc88
... skipping 10 lines ...
I0211 16:09:15.020] deployment.extensions/nginx-deployment image updated
W0211 16:09:15.121] I0211 16:09:15.025363   57631 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1549901344-24100", Name:"nginx-deployment", UID:"60480987-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1982", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5bfd55c857 to 1
W0211 16:09:15.122] I0211 16:09:15.029092   57631 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1549901344-24100", Name:"nginx-deployment-5bfd55c857", UID:"6088cdbc-2e17-11e9-9664-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1983", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5bfd55c857-s747n
I0211 16:09:15.223] apps.sh:337: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0211 16:09:15.235] apps.sh:338: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0211 16:09:15.447] deployment.extensions/nginx-deployment image updated
W0211 16:09:15.547] error: unable to find container named "redis"
I0211 16:09:15.648] apps.sh:343: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0211 16:09:15.664] apps.sh:344: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0211 16:09:15.762] deployment.apps/nginx-deployment image updated
I0211 16:09:15.870] apps.sh:347: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0211 16:09:15.973] apps.sh:348: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0211 16:09:16.159] apps.sh:351: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
... skipping 95 lines ...
I0211 16:09:21.666] Namespace:    namespace-1549901359-13337
I0211 16:09:21.666] Selector:     app=guestbook,tier=frontend
I0211 16:09:21.666] Labels:       app=guestbook
I0211 16:09:21.667]               tier=frontend
I0211 16:09:21.667] Annotations:  <none>
I0211 16:09:21.667] Replicas:     3 current / 3 desired
I0211 16:09:21.667] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 16:09:21.667] Pod Template:
I0211 16:09:21.667]   Labels:  app=guestbook
I0211 16:09:21.668]            tier=frontend
I0211 16:09:21.668]   Containers:
I0211 16:09:21.668]    php-redis:
I0211 16:09:21.668]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0211 16:09:21.800] Namespace:    namespace-1549901359-13337
I0211 16:09:21.800] Selector:     app=guestbook,tier=frontend
I0211 16:09:21.800] Labels:       app=guestbook
I0211 16:09:21.800]               tier=frontend
I0211 16:09:21.801] Annotations:  <none>
I0211 16:09:21.801] Replicas:     3 current / 3 desired
I0211 16:09:21.801] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 16:09:21.801] Pod Template:
I0211 16:09:21.801]   Labels:  app=guestbook
I0211 16:09:21.801]            tier=frontend
I0211 16:09:21.801]   Containers:
I0211 16:09:21.801]    php-redis:
I0211 16:09:21.801]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0211 16:09:21.928] Namespace:    namespace-1549901359-13337
I0211 16:09:21.928] Selector:     app=guestbook,tier=frontend
I0211 16:09:21.928] Labels:       app=guestbook
I0211 16:09:21.929]               tier=frontend
I0211 16:09:21.929] Annotations:  <none>
I0211 16:09:21.929] Replicas:     3 current / 3 desired
I0211 16:09:21.929] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 16:09:21.929] Pod Template:
I0211 16:09:21.929]   Labels:  app=guestbook
I0211 16:09:21.929]            tier=frontend
I0211 16:09:21.930]   Containers:
I0211 16:09:21.930]    php-redis:
I0211 16:09:21.930]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
I0211 16:09:22.061] Namespace:    namespace-1549901359-13337
I0211 16:09:22.061] Selector:     app=guestbook,tier=frontend
I0211 16:09:22.062] Labels:       app=guestbook
I0211 16:09:22.062]               tier=frontend
I0211 16:09:22.062] Annotations:  <none>
I0211 16:09:22.062] Replicas:     3 current / 3 desired
I0211 16:09:22.062] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 16:09:22.062] Pod Template:
I0211 16:09:22.063]   Labels:  app=guestbook
I0211 16:09:22.063]            tier=frontend
I0211 16:09:22.063]   Containers:
I0211 16:09:22.063]    php-redis:
I0211 16:09:22.063]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0211 16:09:22.222] Namespace:    namespace-1549901359-13337
I0211 16:09:22.222] Selector:     app=guestbook,tier=frontend
I0211 16:09:22.222] Labels:       app=guestbook
I0211 16:09:22.222]               tier=frontend
I0211 16:09:22.222] Annotations:  <none>
I0211 16:09:22.223] Replicas:     3 current / 3 desired
I0211 16:09:22.223] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 16:09:22.223] Pod Template:
I0211 16:09:22.223]   Labels:  app=guestbook
I0211 16:09:22.223]            tier=frontend
I0211 16:09:22.223]   Containers:
I0211 16:09:22.223]    php-redis:
I0211 16:09:22.223]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0211 16:09:22.350] Namespace:    namespace-1549901359-13337
I0211 16:09:22.350] Selector:     app=guestbook,tier=frontend
I0211 16:09:22.350] Labels:       app=guestbook
I0211 16:09:22.350]               tier=frontend
I0211 16:09:22.350] Annotations:  <none>
I0211 16:09:22.350] Replicas:     3 current / 3 desired
I0211 16:09:22.351] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 16:09:22.351] Pod Template:
I0211 16:09:22.351]   Labels:  app=guestbook
I0211 16:09:22.351]            tier=frontend
I0211 16:09:22.351]   Containers:
I0211 16:09:22.351]    php-redis:
I0211 16:09:22.351]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0211 16:09:22.478] Namespace:    namespace-1549901359-13337
I0211 16:09:22.478] Selector:     app=guestbook,tier=frontend
I0211 16:09:22.478] Labels:       app=guestbook
I0211 16:09:22.478]               tier=frontend
I0211 16:09:22.478] Annotations:  <none>
I0211 16:09:22.478] Replicas:     3 current / 3 desired
I0211 16:09:22.478] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 16:09:22.478] Pod Template:
I0211 16:09:22.479]   Labels:  app=guestbook
I0211 16:09:22.479]            tier=frontend
I0211 16:09:22.479]   Containers:
I0211 16:09:22.479]    php-redis:
I0211 16:09:22.479]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I0211 16:09:22.609] Namespace:    namespace-1549901359-13337
I0211 16:09:22.609] Selector:     app=guestbook,tier=frontend
I0211 16:09:22.609] Labels:       app=guestbook
I0211 16:09:22.609]               tier=frontend
I0211 16:09:22.609] Annotations:  <none>
I0211 16:09:22.610] Replicas:     3 current / 3 desired
I0211 16:09:22.610] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0211 16:09:22.610] Pod Template:
I0211 16:09:22.610]   Labels:  app=guestbook
I0211 16:09:22.610]            tier=frontend
I0211 16:09:22.610]   Containers:
I0211 16:09:22.610]    php-redis:
I0211 16:09:22.610]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 184 lines ...
I0211 16:09:28.280] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0211 16:09:28.384] apps.sh:643: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I0211 16:09:28.470] horizontalpodautoscaler.autoscaling "frontend" deleted
I0211 16:09:28.574] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0211 16:09:28.679] apps.sh:647: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0211 16:09:28.767] horizontalpodautoscaler.autoscaling "frontend" deleted
W0211 16:09:28.868] Error: required flag(s) "max" not set
W0211 16:09:28.868] 
W0211 16:09:28.868] 
W0211 16:09:28.869] Examples:
W0211 16:09:28.869]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0211 16:09:28.869]   kubectl autoscale deployment foo --min=2 --max=10
W0211 16:09:28.869]   
... skipping 88 lines ...
I0211 16:09:32.264] apps.sh:431: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0211 16:09:32.368] apps.sh:432: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0211 16:09:32.494] statefulset.apps/nginx rolled back
I0211 16:09:32.611] apps.sh:435: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0211 16:09:32.723] apps.sh:436: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0211 16:09:32.842] Successful
I0211 16:09:32.843] message:error: unable to find specified revision 1000000 in history
I0211 16:09:32.843] has:unable to find specified revision
I0211 16:09:32.946] apps.sh:440: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0211 16:09:33.051] apps.sh:441: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0211 16:09:33.169] statefulset.apps/nginx rolled back
I0211 16:09:33.284] apps.sh:444: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
I0211 16:09:33.393] apps.sh:445: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
... skipping 58 lines ...
I0211 16:09:35.486] Name:         mock
I0211 16:09:35.486] Namespace:    namespace-1549901374-16858
I0211 16:09:35.486] Selector:     app=mock
I0211 16:09:35.486] Labels:       app=mock
I0211 16:09:35.486] Annotations:  <none>
I0211 16:09:35.487] Replicas:     1 current / 1 desired
I0211 16:09:35.487] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0211 16:09:35.487] Pod Template:
I0211 16:09:35.487]   Labels:  app=mock
I0211 16:09:35.487]   Containers:
I0211 16:09:35.487]    mock-container:
I0211 16:09:35.487]     Image:        k8s.gcr.io/pause:2.0
I0211 16:09:35.487]     Port:         9949/TCP
... skipping 56 lines ...
I0211 16:09:37.968] Name:         mock
I0211 16:09:37.968] Namespace:    namespace-1549901374-16858
I0211 16:09:37.968] Selector:     app=mock
I0211 16:09:37.969] Labels:       app=mock
I0211 16:09:37.969] Annotations:  <none>
I0211 16:09:37.969] Replicas:     1 current / 1 desired
I0211 16:09:37.969] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0211 16:09:37.969] Pod Template:
I0211 16:09:37.969]   Labels:  app=mock
I0211 16:09:37.969]   Containers:
I0211 16:09:37.969]    mock-container:
I0211 16:09:37.969]     Image:        k8s.gcr.io/pause:2.0
I0211 16:09:37.970]     Port:         9949/TCP
... skipping 56 lines ...
I0211 16:09:40.432] Name:         mock
I0211 16:09:40.432] Namespace:    namespace-1549901374-16858
I0211 16:09:40.432] Selector:     app=mock
I0211 16:09:40.432] Labels:       app=mock
I0211 16:09:40.432] Annotations:  <none>
I0211 16:09:40.432] Replicas:     1 current / 1 desired
I0211 16:09:40.432] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0211 16:09:40.433] Pod Template:
I0211 16:09:40.433]   Labels:  app=mock
I0211 16:09:40.433]   Containers:
I0211 16:09:40.433]    mock-container:
I0211 16:09:40.433]     Image:        k8s.gcr.io/pause:2.0
I0211 16:09:40.433]     Port:         9949/TCP
... skipping 42 lines ...
I0211 16:09:42.773] Namespace:    namespace-1549901374-16858
I0211 16:09:42.773] Selector:     app=mock
I0211 16:09:42.773] Labels:       app=mock
I0211 16:09:42.774]               status=replaced
I0211 16:09:42.774] Annotations:  <none>
I0211 16:09:42.774] Replicas:     1 current / 1 desired
I0211 16:09:42.774] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0211 16:09:42.774] Pod Template:
I0211 16:09:42.774]   Labels:  app=mock
I0211 16:09:42.774]   Containers:
I0211 16:09:42.775]    mock-container:
I0211 16:09:42.775]     Image:        k8s.gcr.io/pause:2.0
I0211 16:09:42.775]     Port:         9949/TCP
... skipping 11 lines ...
I0211 16:09:42.784] Namespace:    namespace-1549901374-16858
I0211 16:09:42.784] Selector:     app=mock2
I0211 16:09:42.784] Labels:       app=mock2
I0211 16:09:42.784]               status=replaced
I0211 16:09:42.784] Annotations:  <none>
I0211 16:09:42.785] Replicas:     1 current / 1 desired
I0211 16:09:42.785] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0211 16:09:42.785] Pod Template:
I0211 16:09:42.785]   Labels:  app=mock2
I0211 16:09:42.785]   Containers:
I0211 16:09:42.785]    mock-container:
I0211 16:09:42.785]     Image:        k8s.gcr.io/pause:2.0
I0211 16:09:42.786]     Port:         9949/TCP
... skipping 108 lines ...
I0211 16:09:48.442] +++ [0211 16:09:48] Testing persistent volumes
I0211 16:09:48.543] storage.sh:30: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I0211 16:09:48.718] persistentvolume/pv0001 created
I0211 16:09:48.831] storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
I0211 16:09:48.919] persistentvolume "pv0001" deleted
I0211 16:09:49.119] persistentvolume/pv0002 created
W0211 16:09:49.220] E0211 16:09:49.122076   57631 pv_protection_controller.go:116] PV pv0002 failed with : Operation cannot be fulfilled on persistentvolumes "pv0002": the object has been modified; please apply your changes to the latest version and try again
I0211 16:09:49.320] storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
I0211 16:09:49.321] persistentvolume "pv0002" deleted
I0211 16:09:49.490] persistentvolume/pv0003 created
I0211 16:09:49.600] storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
I0211 16:09:49.688] persistentvolume "pv0003" deleted
I0211 16:09:49.799] storage.sh:42: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 469 lines ...
I0211 16:09:55.004] yes
I0211 16:09:55.004] has:the server doesn't have a resource type
I0211 16:09:55.090] Successful
I0211 16:09:55.090] message:yes
I0211 16:09:55.091] has:yes
I0211 16:09:55.171] Successful
I0211 16:09:55.171] message:error: --subresource can not be used with NonResourceURL
I0211 16:09:55.171] has:subresource can not be used with NonResourceURL
I0211 16:09:55.259] Successful
I0211 16:09:55.354] Successful
I0211 16:09:55.355] message:yes
I0211 16:09:55.355] 0
I0211 16:09:55.355] has:0
... skipping 6 lines ...
I0211 16:09:55.567] role.rbac.authorization.k8s.io/testing-R reconciled
I0211 16:09:55.674] legacy-script.sh:745: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
I0211 16:09:55.776] legacy-script.sh:746: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
I0211 16:09:55.878] legacy-script.sh:747: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
I0211 16:09:55.984] legacy-script.sh:748: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
I0211 16:09:56.072] Successful
I0211 16:09:56.072] message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
I0211 16:09:56.072] has:only rbac.authorization.k8s.io/v1 is supported
I0211 16:09:56.171] rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
I0211 16:09:56.178] role.rbac.authorization.k8s.io "testing-R" deleted
I0211 16:09:56.189] clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
I0211 16:09:56.197] clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
I0211 16:09:56.210] Recording: run_retrieve_multiple_tests
... skipping 1017 lines ...
I0211 16:10:25.847] message:node/127.0.0.1 already uncordoned (dry run)
I0211 16:10:25.847] has:already uncordoned
I0211 16:10:25.947] node-management.sh:119: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
I0211 16:10:26.041] node/127.0.0.1 labeled
I0211 16:10:26.148] node-management.sh:124: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
I0211 16:10:26.226] Successful
I0211 16:10:26.226] message:error: cannot specify both a node name and a --selector option
I0211 16:10:26.226] See 'kubectl drain -h' for help and examples
I0211 16:10:26.226] has:cannot specify both a node name
I0211 16:10:26.304] Successful
I0211 16:10:26.304] message:error: USAGE: cordon NODE [flags]
I0211 16:10:26.304] See 'kubectl cordon -h' for help and examples
I0211 16:10:26.304] has:error\: USAGE\: cordon NODE
I0211 16:10:26.390] node/127.0.0.1 already uncordoned
I0211 16:10:26.477] Successful
I0211 16:10:26.477] message:error: You must provide one or more resources by argument or filename.
I0211 16:10:26.477] Example resource specifications include:
I0211 16:10:26.478]    '-f rsrc.yaml'
I0211 16:10:26.478]    '--filename=rsrc.json'
I0211 16:10:26.478]    '<resource> <name>'
I0211 16:10:26.478]    '<resource>'
I0211 16:10:26.478] has:must provide one or more resources
... skipping 15 lines ...
I0211 16:10:26.978] Successful
I0211 16:10:26.978] message:The following compatible plugins are available:
I0211 16:10:26.978] 
I0211 16:10:26.978] test/fixtures/pkg/kubectl/plugins/version/kubectl-version
I0211 16:10:26.978]   - warning: kubectl-version overwrites existing command: "kubectl version"
I0211 16:10:26.979] 
I0211 16:10:26.979] error: one plugin warning was found
I0211 16:10:26.979] has:kubectl-version overwrites existing command: "kubectl version"
I0211 16:10:27.062] Successful
I0211 16:10:27.062] message:The following compatible plugins are available:
I0211 16:10:27.062] 
I0211 16:10:27.062] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0211 16:10:27.063] test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
I0211 16:10:27.063]   - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0211 16:10:27.063] 
I0211 16:10:27.063] error: one plugin warning was found
I0211 16:10:27.063] has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
I0211 16:10:27.142] Successful
I0211 16:10:27.142] message:The following compatible plugins are available:
I0211 16:10:27.142] 
I0211 16:10:27.142] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0211 16:10:27.142] has:plugins are available
I0211 16:10:27.228] Successful
I0211 16:10:27.228] message:
I0211 16:10:27.228] error: unable to find any kubectl plugins in your PATH
I0211 16:10:27.228] has:unable to find any kubectl plugins in your PATH
I0211 16:10:27.310] Successful
I0211 16:10:27.310] message:I am plugin foo
I0211 16:10:27.310] has:plugin foo
I0211 16:10:27.391] Successful
I0211 16:10:27.392] message:Client Version: version.Info{Major:"1", Minor:"14+", GitVersion:"v1.14.0-alpha.2.523+78d08c6ea0d156", GitCommit:"78d08c6ea0d156e023b8a4ca8b89f973784d94d1", GitTreeState:"clean", BuildDate:"2019-02-11T16:03:12Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
... skipping 9 lines ...
I0211 16:10:27.487] 
I0211 16:10:27.490] +++ Running case: test-cmd.run_impersonation_tests 
I0211 16:10:27.493] +++ working dir: /go/src/k8s.io/kubernetes
I0211 16:10:27.496] +++ command: run_impersonation_tests
I0211 16:10:27.509] +++ [0211 16:10:27] Testing impersonation
I0211 16:10:27.588] Successful
I0211 16:10:27.589] message:error: requesting groups or user-extra for  without impersonating a user
I0211 16:10:27.589] has:without impersonating a user
I0211 16:10:27.769] certificatesigningrequest.certificates.k8s.io/foo created
I0211 16:10:27.882] authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
I0211 16:10:27.991] authorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
I0211 16:10:28.085] certificatesigningrequest.certificates.k8s.io "foo" deleted
I0211 16:10:28.273] certificatesigningrequest.certificates.k8s.io/foo created
... skipping 65 lines ...
W0211 16:10:31.562] I0211 16:10:31.553919   54270 available_controller.go:328] Shutting down AvailableConditionController
W0211 16:10:31.562] I0211 16:10:31.556283   54270 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 16:10:31.562] I0211 16:10:31.556292   54270 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 16:10:31.562] I0211 16:10:31.556294   54270 secure_serving.go:160] Stopped listening on 127.0.0.1:6443
W0211 16:10:31.563] I0211 16:10:31.556311   54270 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 16:10:31.563] I0211 16:10:31.556339   54270 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 16:10:31.563] W0211 16:10:31.556686   54270 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 16:10:31.563] W0211 16:10:31.556755   54270 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 16:10:31.563] W0211 16:10:31.556786   54270 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 16:10:31.564] I0211 16:10:31.556860   54270 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 16:10:31.564] I0211 16:10:31.556874   54270 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 16:10:31.564] W0211 16:10:31.556734   54270 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 16:10:31.564] W0211 16:10:31.556862   54270 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 16:10:31.564] W0211 16:10:31.556902   54270 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 16:10:31.565] W0211 16:10:31.556942   54270 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 16:10:31.565] W0211 16:10:31.556980   54270 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 16:10:31.565] W0211 16:10:31.557027   54270 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 16:10:31.565] W0211 16:10:31.557199   54270 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 16:10:31.566] W0211 16:10:31.557426   54270 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 16:10:31.566] I0211 16:10:31.557551   54270 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 16:10:31.566] I0211 16:10:31.557567   54270 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 16:10:31.566] W0211 16:10:31.557208   54270 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0211 16:10:31.566] I0211 16:10:31.557831   54270 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 16:10:31.567] I0211 16:10:31.557874   54270 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 16:10:31.567] I0211 16:10:31.557715   54270 picker_wrapper.go:218] blockingPicker: the picked transport is not ready, loop back to repick
W0211 16:10:31.567] I0211 16:10:31.558157   54270 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 16:10:31.567] I0211 16:10:31.558167   54270 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 16:10:31.567] I0211 16:10:31.558197   54270 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 54 lines ...
W0211 16:10:31.575] I0211 16:10:31.559347   54270 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 16:10:31.575] I0211 16:10:31.559356   54270 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 16:10:31.576] I0211 16:10:31.559349   54270 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 16:10:31.576] I0211 16:10:31.559379   54270 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 16:10:31.576] I0211 16:10:31.559508   54270 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 16:10:31.576] I0211 16:10:31.559542   54270 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0211 16:10:31.576] E0211 16:10:31.559557   54270 controller.go:172] rpc error: code = Unavailable desc = transport is closing
W0211 16:10:31.634] + make test-integration
I0211 16:10:31.734] No resources found
I0211 16:10:31.735] No resources found
I0211 16:10:31.735] +++ [0211 16:10:31] TESTS PASSED
I0211 16:10:31.735] junit report dir: /workspace/artifacts
I0211 16:10:31.735] +++ [0211 16:10:31] Clean up complete
... skipping 5 lines ...
I0211 16:10:36.890] +++ [0211 16:10:36] On try 2, etcd: : http://127.0.0.1:2379
I0211 16:10:36.902] {"action":"set","node":{"key":"/_test","value":"","modifiedIndex":4,"createdIndex":4}}
I0211 16:10:36.908] +++ [0211 16:10:36] Running integration test cases
I0211 16:10:42.691] Running tests for APIVersion: v1,admissionregistration.k8s.io/v1beta1,admission.k8s.io/v1beta1,apps/v1,apps/v1beta1,apps/v1beta2,auditregistration.k8s.io/v1alpha1,authentication.k8s.io/v1,authentication.k8s.io/v1beta1,authorization.k8s.io/v1,authorization.k8s.io/v1beta1,autoscaling/v1,autoscaling/v2beta1,autoscaling/v2beta2,batch/v1,batch/v1beta1,batch/v2alpha1,certificates.k8s.io/v1beta1,coordination.k8s.io/v1beta1,coordination.k8s.io/v1,extensions/v1beta1,events.k8s.io/v1beta1,imagepolicy.k8s.io/v1alpha1,networking.k8s.io/v1,policy/v1beta1,rbac.authorization.k8s.io/v1,rbac.authorization.k8s.io/v1beta1,rbac.authorization.k8s.io/v1alpha1,scheduling.k8s.io/v1alpha1,scheduling.k8s.io/v1beta1,settings.k8s.io/v1alpha1,storage.k8s.io/v1beta1,storage.k8s.io/v1,storage.k8s.io/v1alpha1,
I0211 16:10:42.740] +++ [0211 16:10:42] Running tests without code coverage
W0211 16:11:06.259] # k8s.io/kubernetes/test/integration/apimachinery [k8s.io/kubernetes/test/integration/apimachinery.test]
W0211 16:11:06.260] test/integration/apimachinery/watch_restart_test.go:179:4: cannot use func literal (type func(*kubernetes.Clientset, *"k8s.io/kubernetes/vendor/k8s.io/api/core/v1".Secret) ("k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch".Interface, error)) as type func(*kubernetes.Clientset, *"k8s.io/kubernetes/vendor/k8s.io/api/core/v1".Secret) ("k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/watch".Interface, error, func()) in field value
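(Aside, not part of the log: the compile error above is what fails the whole `test/integration/apimachinery` package, which is the single test failure this job reports. It is a Go type mismatch: a func literal returning `(watch.Interface, error)` is assigned to a struct field whose type also expects a trailing `func()` (presumably a cleanup callback added by this PR). The sketch below is a minimal, self-contained illustration of that class of error with hypothetical names, not the actual test code.)

```go
package main

import "fmt"

// Stand-in for watch.Interface; purely illustrative.
type watcher interface{}

// The field type expects three return values, including a cleanup func().
type testCase struct {
	getWatcher func(name string) (watcher, error, func())
}

func main() {
	tc := testCase{
		// This literal matches the field's signature and compiles. Dropping
		// the third return value, i.e.
		//   func(name string) (watcher, error) { ... }
		// fails with "cannot use func literal ... in field value", the same
		// shape of error reported for watch_restart_test.go:179.
		getWatcher: func(name string) (watcher, error, func()) {
			return nil, nil, func() {}
		},
	}
	w, err, cleanup := tc.getWatcher("example")
	defer cleanup()
	fmt.Println(w, err)
}
```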
I0211 16:22:33.794] FAIL	k8s.io/kubernetes/test/integration/apimachinery [build failed]
I0211 16:22:33.795] ok  	k8s.io/kubernetes/test/integration/apiserver	48.376s
I0211 16:22:33.795] ok  	k8s.io/kubernetes/test/integration/apiserver/apply	26.421s
I0211 16:22:33.795] ok  	k8s.io/kubernetes/test/integration/auth	104.719s
I0211 16:22:33.796] ok  	k8s.io/kubernetes/test/integration/client	74.317s
I0211 16:22:33.796] ok  	k8s.io/kubernetes/test/integration/configmap	6.285s
I0211 16:22:33.796] ok  	k8s.io/kubernetes/test/integration/cronjob	56.724s
... skipping 27 lines ...
I0211 16:22:33.799] ok  	k8s.io/kubernetes/test/integration/storageclasses	4.582s
I0211 16:22:33.799] ok  	k8s.io/kubernetes/test/integration/tls	10.153s
I0211 16:22:33.800] ok  	k8s.io/kubernetes/test/integration/ttlcontroller	10.949s
I0211 16:22:33.800] ok  	k8s.io/kubernetes/test/integration/volume	94.372s
I0211 16:22:33.800] ok  	k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration	145.535s
I0211 16:22:47.954] +++ [0211 16:22:47] Saved JUnit XML test report to /workspace/artifacts/junit_642613dbe8fbf016c1770a7007e34bb12666c617_20190211-161042.xml
I0211 16:22:47.958] Makefile:184: recipe for target 'test' failed
I0211 16:22:47.970] +++ [0211 16:22:47] Cleaning up etcd
W0211 16:22:48.071] make[1]: *** [test] Error 1
W0211 16:22:48.071] !!! [0211 16:22:47] Call tree:
W0211 16:22:48.071] !!! [0211 16:22:47]  1: hack/make-rules/test-integration.sh:99 runTests(...)
I0211 16:22:48.273] +++ [0211 16:22:48] Integration test cleanup complete
I0211 16:22:48.274] Makefile:203: recipe for target 'test-integration' failed
W0211 16:22:48.375] make: *** [test-integration] Error 1
W0211 16:22:51.033] Traceback (most recent call last):
W0211 16:22:51.034]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 178, in <module>
W0211 16:22:51.034]     ARGS.exclude_typecheck, ARGS.exclude_godep)
W0211 16:22:51.034]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 140, in main
W0211 16:22:51.034]     check(*cmd)
W0211 16:22:51.034]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W0211 16:22:51.034]     subprocess.check_call(cmd)
W0211 16:22:51.035]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0211 16:22:51.052]     raise CalledProcessError(retcode, cmd)
W0211 16:22:51.053] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=n', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'EXCLUDE_TYPECHECK=n', '-e', 'EXCLUDE_GODEP=n', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.13-v20190125-cc5d6ecff3', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
E0211 16:22:51.063] Command failed
I0211 16:22:51.063] process 685 exited with code 1 after 26.6m
E0211 16:22:51.064] FAIL: pull-kubernetes-integration
I0211 16:22:51.064] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0211 16:22:51.631] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0211 16:22:51.690] process 127123 exited with code 0 after 0.0m
I0211 16:22:51.690] Call:  gcloud config get-value account
I0211 16:22:52.040] process 127135 exited with code 0 after 0.0m
I0211 16:22:52.040] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0211 16:22:52.041] Upload result and artifacts...
I0211 16:22:52.041] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/67350/pull-kubernetes-integration/44330
I0211 16:22:52.041] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/67350/pull-kubernetes-integration/44330/artifacts
W0211 16:22:53.258] CommandException: One or more URLs matched no objects.
E0211 16:22:53.417] Command failed
I0211 16:22:53.417] process 127147 exited with code 1 after 0.0m
W0211 16:22:53.418] Remote dir gs://kubernetes-jenkins/pr-logs/pull/67350/pull-kubernetes-integration/44330/artifacts not exist yet
I0211 16:22:53.418] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/67350/pull-kubernetes-integration/44330/artifacts
I0211 16:22:58.875] process 127289 exited with code 0 after 0.1m
W0211 16:22:58.876] metadata path /workspace/_artifacts/metadata.json does not exist
W0211 16:22:58.876] metadata not found or invalid, init with empty metadata
... skipping 23 lines ...