PR (mtaufen): upload Windows startup scripts to GCS for CI
Result: FAILURE
Tests: 1 failed / 554 succeeded
Started: 2019-02-25 21:54
Elapsed: 25m13s
Builder: gke-prow-containerd-pool-99179761-54fj
Refs: master:1eb2acca, 73650:dfded520
pod: d2015a0d-3947-11e9-b41a-0a580a6c1321
infra-commit: f70ee9e84
repo: k8s.io/kubernetes
repo-commit: ecee4ad3812b8c765a4a79441c0584b171c21a8e
repos: {u'k8s.io/kubernetes': u'master:1eb2acca99fefad25a29870ffc9de29213db943b,73650:dfded52066554626b21fc99b225e019131698bd2'}

Test Failures


k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration TestValidateOnlyStatus 2.62s

go test -v k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration -run TestValidateOnlyStatus$
I0225 22:14:39.152075  114678 customresource_discovery_controller.go:214] Shutting down DiscoveryController
I0225 22:14:39.152229  114678 secure_serving.go:160] Stopped listening on 127.0.0.1:35173
I0225 22:14:39.152091  114678 establishing_controller.go:84] Shutting down EstablishingController
I0225 22:14:39.152706  114678 serving.go:312] Generated self-signed cert (/tmp/apiextensions-apiserver510431579/apiserver.crt, /tmp/apiextensions-apiserver510431579/apiserver.key)
W0225 22:14:39.664595  114678 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0225 22:14:39.666260  114678 clientconn.go:551] parsed scheme: ""
I0225 22:14:39.666285  114678 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0225 22:14:39.666328  114678 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0225 22:14:39.666385  114678 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0225 22:14:39.666697  114678 clientconn.go:551] parsed scheme: ""
I0225 22:14:39.666715  114678 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0225 22:14:39.666720  114678 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0225 22:14:39.666750  114678 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0225 22:14:39.666794  114678 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0225 22:14:39.667046  114678 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:14:39.668291  114678 mutation_detector.go:48] Mutation detector is enabled, this will result in memory leakage.
I0225 22:14:39.669322  114678 secure_serving.go:116] Serving securely on 127.0.0.1:42951
I0225 22:14:39.669454  114678 naming_controller.go:284] Starting NamingConditionController
I0225 22:14:39.669484  114678 crd_finalizer.go:242] Starting CRDFinalizer
I0225 22:14:39.669432  114678 customresource_discovery_controller.go:203] Starting DiscoveryController
I0225 22:14:39.669524  114678 establishing_controller.go:73] Starting EstablishingController
E0225 22:14:39.669922  114678 reflector.go:135] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get http://127.1.2.3:12345/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.1.2.3:12345: connect: connection refused
I0225 22:14:40.153231  114678 clientconn.go:551] parsed scheme: ""
I0225 22:14:40.153260  114678 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0225 22:14:40.153303  114678 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0225 22:14:40.153346  114678 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0225 22:14:40.153712  114678 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
E0225 22:14:40.670579  114678 reflector.go:135] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get http://127.1.2.3:12345/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.1.2.3:12345: connect: connection refused
I0225 22:14:40.677372  114678 clientconn.go:551] parsed scheme: ""
I0225 22:14:40.677404  114678 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0225 22:14:40.677438  114678 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0225 22:14:40.677519  114678 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0225 22:14:40.677855  114678 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0225 22:14:40.678418  114678 clientconn.go:551] parsed scheme: ""
I0225 22:14:40.678434  114678 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0225 22:14:40.678467  114678 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0225 22:14:40.678517  114678 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0225 22:14:40.679135  114678 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
E0225 22:14:41.671260  114678 reflector.go:135] k8s.io/client-go/informers/factory.go:132: Failed to list *v1.Service: Get http://127.1.2.3:12345/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.1.2.3:12345: connect: connection refused
I0225 22:14:41.712806  114678 clientconn.go:551] parsed scheme: ""
I0225 22:14:41.712825  114678 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0225 22:14:41.712860  114678 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0225 22:14:41.712914  114678 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0225 22:14:41.713695  114678 clientconn.go:551] parsed scheme: ""
I0225 22:14:41.713709  114678 clientconn.go:557] scheme "" not registered, fallback to default scheme
I0225 22:14:41.713732  114678 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0225 22:14:41.713787  114678 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0225 22:14:41.713988  114678 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0225 22:14:41.715018  114678 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
testserver.go:141: runtime-config=map[api/all:true]
testserver.go:142: Starting apiextensions-apiserver on port 42951...
testserver.go:160: Waiting for /healthz to be ok...
subresources_test.go:537: unexpected error: Operation cannot be fulfilled on noxus.mygroup.example.com "foo": StorageError: invalid object, Code: 4, Key: /490cece7-4eb6-4a4e-9150-d939fd77412d/mygroup.example.com/noxus/not-the-default/foo, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: bfad850f-394a-11e9-a407-0242ac110002, UID in object meta: bfab9d83-394a-11e9-a407-0242ac110002
panic: runtime error: invalid memory address or nil pointer dereference
/usr/local/go/src/testing/testing.go:792 +0x387
/usr/local/go/src/runtime/panic.go:513 +0x1b9
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration/subresources_test.go:541 +0xa87
/usr/local/go/src/testing/testing.go:827 +0xbf
/usr/local/go/src/testing/testing.go:878 +0x35c
				from junit_34ec65c8459586587b0004cdabcb6aa30b905266_20190225-220708.xml



Passed tests: 554
Skipped tests: 4

Error lines from build-log.txt

... skipping 307 lines ...
W0225 22:01:53.454] I0225 22:01:53.454198   44034 serving.go:312] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
W0225 22:01:53.455] I0225 22:01:53.454294   44034 server.go:561] external host was not specified, using 172.17.0.2
W0225 22:01:53.455] W0225 22:01:53.454307   44034 authentication.go:415] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
W0225 22:01:53.455] I0225 22:01:53.454531   44034 server.go:147] Version: v1.15.0-alpha.0.348+ecee4ad3812b8c
W0225 22:01:54.078] I0225 22:01:54.077579   44034 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0225 22:01:54.078] I0225 22:01:54.077628   44034 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0225 22:01:54.079] E0225 22:01:54.078074   44034 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0225 22:01:54.079] E0225 22:01:54.078106   44034 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0225 22:01:54.079] E0225 22:01:54.078202   44034 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0225 22:01:54.079] E0225 22:01:54.078232   44034 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0225 22:01:54.079] E0225 22:01:54.078248   44034 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0225 22:01:54.080] E0225 22:01:54.078274   44034 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0225 22:01:54.080] I0225 22:01:54.078308   44034 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0225 22:01:54.080] I0225 22:01:54.078312   44034 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0225 22:01:54.080] I0225 22:01:54.079692   44034 clientconn.go:551] parsed scheme: ""
W0225 22:01:54.080] I0225 22:01:54.079732   44034 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0225 22:01:54.081] I0225 22:01:54.079784   44034 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0225 22:01:54.081] I0225 22:01:54.079875   44034 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 345 lines ...
W0225 22:01:54.634] W0225 22:01:54.633322   44034 genericapiserver.go:344] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0225 22:01:55.084] I0225 22:01:55.083455   44034 clientconn.go:551] parsed scheme: ""
W0225 22:01:55.084] I0225 22:01:55.083536   44034 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0225 22:01:55.084] I0225 22:01:55.083591   44034 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0225 22:01:55.084] I0225 22:01:55.083731   44034 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:01:55.087] I0225 22:01:55.087411   44034 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:01:55.398] E0225 22:01:55.397979   44034 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
W0225 22:01:55.399] E0225 22:01:55.398040   44034 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
W0225 22:01:55.399] E0225 22:01:55.398080   44034 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
W0225 22:01:55.399] E0225 22:01:55.398168   44034 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
W0225 22:01:55.399] E0225 22:01:55.398197   44034 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0225 22:01:55.399] E0225 22:01:55.398212   44034 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0225 22:01:55.399] I0225 22:01:55.398256   44034 plugins.go:158] Loaded 4 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,TaintNodesByCondition,Priority.
W0225 22:01:55.400] I0225 22:01:55.398277   44034 plugins.go:161] Loaded 4 validating admission controller(s) successfully in the following order: LimitRanger,Priority,PersistentVolumeClaimResize,ResourceQuota.
W0225 22:01:55.400] I0225 22:01:55.399694   44034 clientconn.go:551] parsed scheme: ""
W0225 22:01:55.400] I0225 22:01:55.399714   44034 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0225 22:01:55.400] I0225 22:01:55.399763   44034 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0225 22:01:55.400] I0225 22:01:55.399880   44034 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 148 lines ...
W0225 22:02:31.113] I0225 22:02:31.112323   47447 leaderelection.go:217] attempting to acquire leader lease  kube-system/kube-controller-manager...
W0225 22:02:31.124] I0225 22:02:31.123192   47447 leaderelection.go:227] successfully acquired lease kube-system/kube-controller-manager
W0225 22:02:31.124] I0225 22:02:31.123485   47447 event.go:209] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"kube-controller-manager", UID:"0c2cdcee-3949-11e9-a41f-0242ac110002", APIVersion:"v1", ResourceVersion:"151", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' e981a86b94c9_0c2c1796-3949-11e9-bf3e-0242ac110002 became leader
W0225 22:02:31.278] I0225 22:02:31.277683   47447 plugins.go:103] No cloud provider specified.
W0225 22:02:31.278] W0225 22:02:31.277747   47447 controllermanager.go:517] "serviceaccount-token" is disabled because there is no private key
W0225 22:02:31.278] I0225 22:02:31.278037   47447 node_lifecycle_controller.go:77] Sending events to api server
W0225 22:02:31.279] E0225 22:02:31.279094   47447 core.go:162] failed to start cloud node lifecycle controller: no cloud provider provided
W0225 22:02:31.280] W0225 22:02:31.279278   47447 controllermanager.go:489] Skipping "cloud-node-lifecycle"
W0225 22:02:31.282] I0225 22:02:31.282175   47447 controllermanager.go:497] Started "persistentvolume-binder"
W0225 22:02:31.283] I0225 22:02:31.282216   47447 pv_controller_base.go:271] Starting persistent volume controller
W0225 22:02:31.283] I0225 22:02:31.282244   47447 controller_utils.go:1021] Waiting for caches to sync for persistent volume controller
W0225 22:02:31.435] I0225 22:02:31.434837   47447 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.extensions
W0225 22:02:31.436] I0225 22:02:31.434913   47447 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for deployments.extensions
... skipping 15 lines ...
W0225 22:02:31.439] I0225 22:02:31.435783   47447 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for replicasets.extensions
W0225 22:02:31.439] I0225 22:02:31.435825   47447 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
W0225 22:02:31.439] I0225 22:02:31.435911   47447 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
W0225 22:02:31.440] I0225 22:02:31.435984   47447 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for events.events.k8s.io
W0225 22:02:31.440] I0225 22:02:31.436309   47447 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
W0225 22:02:31.440] I0225 22:02:31.436384   47447 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
W0225 22:02:31.440] E0225 22:02:31.436411   47447 resource_quota_controller.go:171] initial monitor sync has error: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0225 22:02:31.441] I0225 22:02:31.436445   47447 controllermanager.go:497] Started "resourcequota"
W0225 22:02:31.441] I0225 22:02:31.436510   47447 resource_quota_controller.go:276] Starting resource quota controller
W0225 22:02:31.441] I0225 22:02:31.436547   47447 controller_utils.go:1021] Waiting for caches to sync for resource quota controller
W0225 22:02:31.441] I0225 22:02:31.436607   47447 resource_quota_monitor.go:301] QuotaMonitor running
W0225 22:02:31.444] I0225 22:02:31.444260   47447 controllermanager.go:497] Started "namespace"
W0225 22:02:31.445] I0225 22:02:31.444372   47447 namespace_controller.go:186] Starting namespace controller
... skipping 11 lines ...
W0225 22:02:31.448] I0225 22:02:31.448457   47447 controller_utils.go:1021] Waiting for caches to sync for GC controller
W0225 22:02:31.449] I0225 22:02:31.448724   47447 controllermanager.go:497] Started "job"
W0225 22:02:31.449] W0225 22:02:31.448744   47447 controllermanager.go:489] Skipping "ttl-after-finished"
W0225 22:02:31.449] I0225 22:02:31.449318   47447 controllermanager.go:497] Started "deployment"
W0225 22:02:31.450] I0225 22:02:31.449863   47447 controllermanager.go:497] Started "csrapproving"
W0225 22:02:31.450] I0225 22:02:31.450336   47447 controllermanager.go:497] Started "ttl"
W0225 22:02:31.451] E0225 22:02:31.450856   47447 core.go:78] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0225 22:02:31.451] W0225 22:02:31.450876   47447 controllermanager.go:489] Skipping "service"
W0225 22:02:31.451] I0225 22:02:31.451381   47447 controllermanager.go:497] Started "pvc-protection"
W0225 22:02:31.452] I0225 22:02:31.451876   47447 job_controller.go:143] Starting job controller
W0225 22:02:31.452] I0225 22:02:31.451894   47447 controller_utils.go:1021] Waiting for caches to sync for job controller
W0225 22:02:31.452] I0225 22:02:31.451946   47447 deployment_controller.go:152] Starting deployment controller
W0225 22:02:31.452] I0225 22:02:31.451961   47447 controller_utils.go:1021] Waiting for caches to sync for deployment controller
... skipping 91 lines ...
I0225 22:02:32.368]   "gitTreeState": "clean",
I0225 22:02:32.369]   "buildDate": "2019-02-25T22:00:55Z",
I0225 22:02:32.369]   "goVersion": "go1.11.5",
I0225 22:02:32.369]   "compiler": "gc",
I0225 22:02:32.369]   "platform": "linux/amd64"
I0225 22:02:32.520] }+++ [0225 22:02:32] Testing kubectl version: check client only output matches expected output
W0225 22:02:32.621] W0225 22:02:32.379638   47447 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
W0225 22:02:32.622] I0225 22:02:32.382490   47447 controller_utils.go:1028] Caches are synced for persistent volume controller
W0225 22:02:32.622] I0225 22:02:32.417629   47447 controller_utils.go:1028] Caches are synced for ClusterRoleAggregator controller
W0225 22:02:32.622] I0225 22:02:32.417682   47447 controller_utils.go:1028] Caches are synced for attach detach controller
W0225 22:02:32.622] I0225 22:02:32.424614   47447 controller_utils.go:1028] Caches are synced for taint controller
W0225 22:02:32.622] I0225 22:02:32.424771   47447 node_lifecycle_controller.go:1121] Initializing eviction metric for zone: 
W0225 22:02:32.622] I0225 22:02:32.424862   47447 node_lifecycle_controller.go:971] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
W0225 22:02:32.623] I0225 22:02:32.425204   47447 taint_manager.go:198] Starting NoExecuteTaintManager
W0225 22:02:32.623] I0225 22:02:32.425232   47447 event.go:209] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"0c88541e-3949-11e9-a41f-0242ac110002", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller
W0225 22:02:32.623] E0225 22:02:32.430847   47447 clusterroleaggregation_controller.go:180] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
W0225 22:02:32.623] E0225 22:02:32.432255   47447 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0225 22:02:32.624] E0225 22:02:32.445948   47447 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0225 22:02:32.624] I0225 22:02:32.447677   47447 controller_utils.go:1028] Caches are synced for daemon sets controller
W0225 22:02:32.624] I0225 22:02:32.452215   47447 controller_utils.go:1028] Caches are synced for TTL controller
W0225 22:02:32.624] I0225 22:02:32.519908   47447 controller_utils.go:1028] Caches are synced for disruption controller
W0225 22:02:32.624] I0225 22:02:32.519953   47447 disruption.go:294] Sending events to api server.
W0225 22:02:32.625] I0225 22:02:32.524649   47447 controller_utils.go:1028] Caches are synced for stateful set controller
W0225 22:02:32.625] I0225 22:02:32.536897   47447 controller_utils.go:1028] Caches are synced for resource quota controller
... skipping 5 lines ...
I0225 22:02:32.824] Successful: --output json has correct client info
I0225 22:02:32.831] Successful: --output json has correct server info
I0225 22:02:32.834] +++ [0225 22:02:32] Testing kubectl version: verify json output using additional --client flag does not contain serverVersion
I0225 22:02:32.974] Successful: --client --output json has correct client info
I0225 22:02:32.979] Successful: --client --output json has no server info
I0225 22:02:32.982] +++ [0225 22:02:32] Testing kubectl version: compare json output using additional --short flag
W0225 22:02:33.085] E0225 22:02:33.085226   47447 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
I0225 22:02:33.186] Successful: --short --output client json info is equal to non short result
I0225 22:02:33.186] Successful: --short --output server json info is equal to non short result
I0225 22:02:33.186] +++ [0225 22:02:33] Testing kubectl version: compare json output with yaml output
I0225 22:02:33.274] Successful: --output json/yaml has identical information
I0225 22:02:33.288] +++ exit code: 0
I0225 22:02:33.308] Recording: run_kubectl_config_set_tests
... skipping 44 lines ...
I0225 22:02:35.944] +++ working dir: /go/src/k8s.io/kubernetes
I0225 22:02:35.947] +++ command: run_RESTMapper_evaluation_tests
I0225 22:02:35.959] +++ [0225 22:02:35] Creating namespace namespace-1551132155-17378
I0225 22:02:36.039] namespace/namespace-1551132155-17378 created
I0225 22:02:36.112] Context "test" modified.
I0225 22:02:36.119] +++ [0225 22:02:36] Testing RESTMapper
I0225 22:02:36.225] +++ [0225 22:02:36] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
I0225 22:02:36.245] +++ exit code: 0
I0225 22:02:36.379] NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
I0225 22:02:36.379] bindings                                                                      true         Binding
I0225 22:02:36.380] componentstatuses                 cs                                          false        ComponentStatus
I0225 22:02:36.380] configmaps                        cm                                          true         ConfigMap
I0225 22:02:36.380] endpoints                         ep                                          true         Endpoints
... skipping 656 lines ...
I0225 22:02:56.299] poddisruptionbudget.policy/test-pdb-3 created
I0225 22:02:56.389] core.sh:251: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
I0225 22:02:56.462] poddisruptionbudget.policy/test-pdb-4 created
I0225 22:02:56.550] core.sh:255: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
I0225 22:02:56.700] core.sh:261: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}:
I0225 22:02:56.883] pod/env-test-pod created
W0225 22:02:56.984] error: resource(s) were provided, but no name, label selector, or --all flag specified
W0225 22:02:56.984] error: setting 'all' parameter but found a non empty selector. 
W0225 22:02:56.984] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0225 22:02:56.984] I0225 22:02:55.976094   44034 controller.go:606] quota admission added evaluator for: poddisruptionbudgets.policy
W0225 22:02:56.985] error: min-available and max-unavailable cannot be both specified
I0225 22:02:57.085] core.sh:264: Successful describe pods --namespace=test-kubectl-describe-pod env-test-pod:
I0225 22:02:57.085] Name:               env-test-pod
I0225 22:02:57.085] Namespace:          test-kubectl-describe-pod
I0225 22:02:57.085] Priority:           0
I0225 22:02:57.086] PriorityClassName:  <none>
I0225 22:02:57.086] Node:               <none>
... skipping 145 lines ...
W0225 22:03:08.674] I0225 22:03:07.733403   47447 namespace_controller.go:171] Namespace has been deleted test-kubectl-describe-pod
W0225 22:03:08.674] I0225 22:03:08.264523   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551132183-11763", Name:"modified", UID:"224fe8d6-3949-11e9-a41f-0242ac110002", APIVersion:"v1", ResourceVersion:"377", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: modified-lcrzx
I0225 22:03:08.811] core.sh:434: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:03:08.961] pod/valid-pod created
I0225 22:03:09.046] core.sh:438: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0225 22:03:09.182] Successful
I0225 22:03:09.183] message:Error from server: cannot restore map from string
I0225 22:03:09.183] has:cannot restore map from string
I0225 22:03:09.266] Successful
I0225 22:03:09.266] message:pod/valid-pod patched (no change)
I0225 22:03:09.266] has:patched (no change)
I0225 22:03:09.348] pod/valid-pod patched
I0225 22:03:09.432] core.sh:455: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
... skipping 5 lines ...
I0225 22:03:09.936] pod/valid-pod patched
I0225 22:03:10.024] core.sh:470: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
I0225 22:03:10.095] pod/valid-pod patched
I0225 22:03:10.180] core.sh:475: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
I0225 22:03:10.330] pod/valid-pod patched
I0225 22:03:10.420] core.sh:491: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0225 22:03:10.586] +++ [0225 22:03:10] "kubectl patch with resourceVersion 496" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
W0225 22:03:10.687] E0225 22:03:09.175806   44034 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"cannot restore map from string"}
I0225 22:03:10.805] pod "valid-pod" deleted
I0225 22:03:10.816] pod/valid-pod replaced
I0225 22:03:10.902] core.sh:515: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
I0225 22:03:11.055] Successful
I0225 22:03:11.056] message:error: --grace-period must have --force specified
I0225 22:03:11.056] has:\-\-grace-period must have \-\-force specified
I0225 22:03:11.195] Successful
I0225 22:03:11.195] message:error: --timeout must have --force specified
I0225 22:03:11.196] has:\-\-timeout must have \-\-force specified
I0225 22:03:11.336] node/node-v1-test created
W0225 22:03:11.437] W0225 22:03:11.336826   47447 actual_state_of_world.go:503] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
I0225 22:03:11.538] node/node-v1-test replaced
I0225 22:03:11.583] core.sh:552: Successful get node node-v1-test {{.metadata.annotations.a}}: b
I0225 22:03:11.651] node "node-v1-test" deleted
I0225 22:03:11.738] core.sh:559: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
I0225 22:03:11.982] core.sh:562: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
I0225 22:03:12.884] core.sh:575: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
... skipping 57 lines ...
I0225 22:03:16.790] save-config.sh:31: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:03:16.936] pod/test-pod created
W0225 22:03:17.036] Edit cancelled, no changes made.
W0225 22:03:17.037] Edit cancelled, no changes made.
W0225 22:03:17.037] Edit cancelled, no changes made.
W0225 22:03:17.037] Edit cancelled, no changes made.
W0225 22:03:17.037] error: 'name' already has a value (valid-pod), and --overwrite is false
W0225 22:03:17.038] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0225 22:03:17.038] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0225 22:03:17.138] pod "test-pod" deleted
I0225 22:03:17.139] +++ [0225 22:03:17] Creating namespace namespace-1551132197-3653
I0225 22:03:17.194] namespace/namespace-1551132197-3653 created
I0225 22:03:17.262] Context "test" modified.
... skipping 41 lines ...
I0225 22:03:20.408] +++ Running case: test-cmd.run_kubectl_create_error_tests 
I0225 22:03:20.411] +++ working dir: /go/src/k8s.io/kubernetes
I0225 22:03:20.414] +++ command: run_kubectl_create_error_tests
I0225 22:03:20.426] +++ [0225 22:03:20] Creating namespace namespace-1551132200-16473
I0225 22:03:20.502] namespace/namespace-1551132200-16473 created
I0225 22:03:20.573] Context "test" modified.
I0225 22:03:20.580] +++ [0225 22:03:20] Testing kubectl create with error
W0225 22:03:20.681] Error: required flag(s) "filename" not set
W0225 22:03:20.681] 
W0225 22:03:20.681] 
W0225 22:03:20.681] Examples:
W0225 22:03:20.681]   # Create a pod using the data in pod.json.
W0225 22:03:20.681]   kubectl create -f ./pod.json
W0225 22:03:20.682]   
... skipping 38 lines ...
W0225 22:03:20.687]   kubectl create -f FILENAME [options]
W0225 22:03:20.687] 
W0225 22:03:20.687] Use "kubectl <command> --help" for more information about a given command.
W0225 22:03:20.687] Use "kubectl options" for a list of global command-line options (applies to all commands).
W0225 22:03:20.687] 
W0225 22:03:20.687] required flag(s) "filename" not set
I0225 22:03:20.807] +++ [0225 22:03:20] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
W0225 22:03:20.908] kubectl convert is DEPRECATED and will be removed in a future version.
W0225 22:03:20.908] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0225 22:03:21.009] +++ exit code: 0
I0225 22:03:21.035] Recording: run_kubectl_apply_tests
I0225 22:03:21.035] Running command: run_kubectl_apply_tests
I0225 22:03:21.058] 
... skipping 19 lines ...
W0225 22:03:23.099] I0225 22:03:23.099025   44034 clientconn.go:551] parsed scheme: ""
W0225 22:03:23.100] I0225 22:03:23.099068   44034 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0225 22:03:23.100] I0225 22:03:23.099104   44034 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0225 22:03:23.100] I0225 22:03:23.099143   44034 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:03:23.100] I0225 22:03:23.099736   44034 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:03:23.102] I0225 22:03:23.102222   44034 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
W0225 22:03:23.191] Error from server (NotFound): resources.mygroup.example.com "myobj" not found
I0225 22:03:23.292] kind.mygroup.example.com/myobj serverside-applied (server dry run)
I0225 22:03:23.292] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0225 22:03:23.309] +++ exit code: 0
I0225 22:03:23.359] Recording: run_kubectl_run_tests
I0225 22:03:23.359] Running command: run_kubectl_run_tests
I0225 22:03:23.380] 
... skipping 84 lines ...
I0225 22:03:25.809] Context "test" modified.
I0225 22:03:25.815] +++ [0225 22:03:25] Testing kubectl create filter
I0225 22:03:25.900] create.sh:30: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:03:26.058] pod/selector-test-pod created
I0225 22:03:26.157] create.sh:34: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
I0225 22:03:26.242] Successful
I0225 22:03:26.243] message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
I0225 22:03:26.243] has:pods "selector-test-pod-dont-apply" not found
I0225 22:03:26.321] pod "selector-test-pod" deleted
I0225 22:03:26.342] +++ exit code: 0
I0225 22:03:26.397] Recording: run_kubectl_apply_deployments_tests
I0225 22:03:26.398] Running command: run_kubectl_apply_deployments_tests
I0225 22:03:26.419] 
... skipping 40 lines ...
I0225 22:03:28.195] deployment.extensions "my-depl" deleted
I0225 22:03:28.204] replicaset.extensions "my-depl-64775887d7" deleted
I0225 22:03:28.209] replicaset.extensions "my-depl-656cffcbcc" deleted
I0225 22:03:28.222] pod "my-depl-64775887d7-92x7n" deleted
I0225 22:03:28.227] pod "my-depl-656cffcbcc-4gqk9" deleted
W0225 22:03:28.327] I0225 22:03:28.200615   44034 controller.go:606] quota admission added evaluator for: replicasets.extensions
W0225 22:03:28.328] E0225 22:03:28.229178   47447 replica_set.go:450] Sync "namespace-1551132206-7214/my-depl-656cffcbcc" failed with replicasets.apps "my-depl-656cffcbcc" not found
W0225 22:03:28.328] E0225 22:03:28.234054   47447 replica_set.go:450] Sync "namespace-1551132206-7214/my-depl-656cffcbcc" failed with replicasets.apps "my-depl-656cffcbcc" not found
I0225 22:03:28.428] apps.sh:137: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:03:28.433] apps.sh:138: Successful get replicasets {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:03:28.521] apps.sh:139: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:03:28.606] apps.sh:143: Successful get deployments {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:03:28.780] deployment.extensions/nginx created
W0225 22:03:28.880] I0225 22:03:28.785909   47447 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1551132206-7214", Name:"nginx", UID:"2e8aea43-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"589", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-776cc67f78 to 3
W0225 22:03:28.881] I0225 22:03:28.790254   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132206-7214", Name:"nginx-776cc67f78", UID:"2e8bbe31-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"590", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-776cc67f78-t88xg
W0225 22:03:28.881] I0225 22:03:28.795871   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132206-7214", Name:"nginx-776cc67f78", UID:"2e8bbe31-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"590", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-776cc67f78-8vq22
W0225 22:03:28.881] I0225 22:03:28.797677   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132206-7214", Name:"nginx-776cc67f78", UID:"2e8bbe31-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"590", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-776cc67f78-vzscf
I0225 22:03:28.982] apps.sh:147: Successful get deployment nginx {{.metadata.name}}: nginx
I0225 22:03:33.128] Successful
I0225 22:03:33.129] message:Error from server (Conflict): error when applying patch:
I0225 22:03:33.129] {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1551132206-7214\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
I0225 22:03:33.129] to:
I0225 22:03:33.129] Resource: "extensions/v1beta1, Resource=deployments", GroupVersionKind: "extensions/v1beta1, Kind=Deployment"
I0225 22:03:33.129] Name: "nginx", Namespace: "namespace-1551132206-7214"
I0225 22:03:33.131] Object: &{map["metadata":map["labels":map["name":"nginx"] "managedFields":[map["manager":"kube-controller-manager" "operation":"Update" "apiVersion":"apps/v1" "time":"2019-02-25T22:03:28Z" "fields":map["f:metadata":map["f:annotations":map["f:deployment.kubernetes.io/revision":map[]]] "f:status":map["f:updatedReplicas":map[] "f:conditions":map[".":map[] "k:{\"type\":\"Available\"}":map[".":map[] "f:lastTransitionTime":map[] "f:lastUpdateTime":map[] "f:message":map[] "f:reason":map[] "f:status":map[] "f:type":map[]]] "f:observedGeneration":map[] "f:replicas":map[] "f:unavailableReplicas":map[]]]] map["fields":map["f:metadata":map["f:labels":map[".":map[] "f:name":map[]] "f:annotations":map[".":map[] "f:kubectl.kubernetes.io/last-applied-configuration":map[]]] "f:spec":map["f:progressDeadlineSeconds":map[] "f:replicas":map[] "f:revisionHistoryLimit":map[] "f:selector":map[".":map[] "f:matchLabels":map[".":map[] "f:name":map[]]] "f:strategy":map["f:rollingUpdate":map[".":map[] "f:maxSurge":map[] "f:maxUnavailable":map[]] "f:type":map[]] "f:template":map["f:metadata":map["f:labels":map[".":map[] "f:name":map[]]] "f:spec":map["f:dnsPolicy":map[] "f:restartPolicy":map[] "f:schedulerName":map[] "f:securityContext":map[] "f:terminationGracePeriodSeconds":map[] "f:containers":map["k:{\"name\":\"nginx\"}":map["f:terminationMessagePath":map[] "f:terminationMessagePolicy":map[] ".":map[] "f:image":map[] "f:imagePullPolicy":map[] "f:name":map[] "f:ports":map[".":map[] "k:{\"containerPort\":80,\"protocol\":\"TCP\"}":map[".":map[] "f:containerPort":map[] "f:protocol":map[]]] "f:resources":map[]]]]]]] "manager":"kubectl" "operation":"Update" "apiVersion":"extensions/v1beta1" "time":"2019-02-25T22:03:28Z"]] "name":"nginx" "namespace":"namespace-1551132206-7214" "selfLink":"/apis/extensions/v1beta1/namespaces/namespace-1551132206-7214/deployments/nginx" "uid":"2e8aea43-3949-11e9-a41f-0242ac110002" "resourceVersion":"602" "creationTimestamp":"2019-02-25T22:03:28Z" 
"generation":'\x01' "annotations":map["deployment.kubernetes.io/revision":"1" "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1551132206-7214\"},\"spec\":{\"replicas\":3,\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx1\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"]] "spec":map["replicas":'\x03' "selector":map["matchLabels":map["name":"nginx1"]] "template":map["metadata":map["creationTimestamp":<nil> "labels":map["name":"nginx1"]] "spec":map["restartPolicy":"Always" "terminationGracePeriodSeconds":'\x1e' "dnsPolicy":"ClusterFirst" "securityContext":map[] "schedulerName":"default-scheduler" "containers":[map["terminationMessagePath":"/dev/termination-log" "terminationMessagePolicy":"File" "imagePullPolicy":"IfNotPresent" "name":"nginx" "image":"k8s.gcr.io/nginx:test-cmd" "ports":[map["containerPort":'P' "protocol":"TCP"]] "resources":map[]]]]] "strategy":map["type":"RollingUpdate" "rollingUpdate":map["maxUnavailable":'\x01' "maxSurge":'\x01']] "revisionHistoryLimit":%!q(int64=+2147483647) "progressDeadlineSeconds":%!q(int64=+2147483647)] "status":map["observedGeneration":'\x01' "replicas":'\x03' "updatedReplicas":'\x03' "unavailableReplicas":'\x03' "conditions":[map["status":"False" "lastUpdateTime":"2019-02-25T22:03:28Z" "lastTransitionTime":"2019-02-25T22:03:28Z" "reason":"MinimumReplicasUnavailable" "message":"Deployment does not have minimum availability." "type":"Available"]]] "kind":"Deployment" "apiVersion":"extensions/v1beta1"]}
I0225 22:03:33.132] for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.extensions "nginx": the object has been modified; please apply your changes to the latest version and try again
I0225 22:03:33.132] has:Error from server (Conflict)
W0225 22:03:34.809] I0225 22:03:34.809112   47447 horizontal.go:320] Horizontal Pod Autoscaler frontend has been deleted in namespace-1551132197-4921
W0225 22:03:37.579] E0225 22:03:37.579034   47447 replica_set.go:450] Sync "namespace-1551132206-7214/nginx-776cc67f78" failed with replicasets.apps "nginx-776cc67f78" not found
W0225 22:03:37.583] E0225 22:03:37.582491   47447 replica_set.go:450] Sync "namespace-1551132206-7214/nginx-776cc67f78" failed with replicasets.apps "nginx-776cc67f78" not found
I0225 22:03:38.403] deployment.extensions/nginx configured
I0225 22:03:38.500] Successful
I0225 22:03:38.500] message:        "name": "nginx2"
I0225 22:03:38.500]           "name": "nginx2"
I0225 22:03:38.501] has:"name": "nginx2"
W0225 22:03:38.601] I0225 22:03:38.409136   47447 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1551132206-7214", Name:"nginx", UID:"344737fd-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"626", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7bd4fbc645 to 3
W0225 22:03:38.602] I0225 22:03:38.414265   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132206-7214", Name:"nginx-7bd4fbc645", UID:"34482591-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"627", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7bd4fbc645-tb6qn
W0225 22:03:38.602] I0225 22:03:38.419413   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132206-7214", Name:"nginx-7bd4fbc645", UID:"34482591-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"627", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7bd4fbc645-56tmn
W0225 22:03:38.602] I0225 22:03:38.420318   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132206-7214", Name:"nginx-7bd4fbc645", UID:"34482591-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"627", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7bd4fbc645-pw259
W0225 22:03:42.784] E0225 22:03:42.783713   47447 replica_set.go:450] Sync "namespace-1551132206-7214/nginx-7bd4fbc645" failed with replicasets.apps "nginx-7bd4fbc645" not found
W0225 22:03:43.770] I0225 22:03:43.769481   47447 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1551132206-7214", Name:"nginx", UID:"37790bc7-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"659", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-7bd4fbc645 to 3
W0225 22:03:43.776] I0225 22:03:43.775469   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132206-7214", Name:"nginx-7bd4fbc645", UID:"3779fded-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7bd4fbc645-9t9ds
W0225 22:03:43.782] I0225 22:03:43.781846   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132206-7214", Name:"nginx-7bd4fbc645", UID:"3779fded-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7bd4fbc645-zzbg6
W0225 22:03:43.784] I0225 22:03:43.783519   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132206-7214", Name:"nginx-7bd4fbc645", UID:"3779fded-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"660", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-7bd4fbc645-mxwmh
I0225 22:03:43.884] Successful
I0225 22:03:43.884] message:The Deployment "nginx" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"name":"nginx3"}: `selector` does not match template `labels`
... skipping 159 lines ...
I0225 22:03:45.856] +++ [0225 22:03:45] Creating namespace namespace-1551132225-21796
I0225 22:03:45.932] namespace/namespace-1551132225-21796 created
I0225 22:03:46.001] Context "test" modified.
I0225 22:03:46.008] +++ [0225 22:03:46] Testing kubectl get
I0225 22:03:46.099] get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:03:46.181] Successful
I0225 22:03:46.181] message:Error from server (NotFound): pods "abc" not found
I0225 22:03:46.182] has:pods "abc" not found
I0225 22:03:46.268] get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:03:46.353] Successful
I0225 22:03:46.354] message:Error from server (NotFound): pods "abc" not found
I0225 22:03:46.354] has:pods "abc" not found
I0225 22:03:46.443] get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:03:46.531] Successful
I0225 22:03:46.531] message:{
I0225 22:03:46.531]     "apiVersion": "v1",
I0225 22:03:46.531]     "items": [],
... skipping 23 lines ...
I0225 22:03:46.865] has not:No resources found
I0225 22:03:46.948] Successful
I0225 22:03:46.948] message:NAME
I0225 22:03:46.948] has not:No resources found
I0225 22:03:47.041] get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:03:47.137] Successful
I0225 22:03:47.137] message:error: the server doesn't have a resource type "foobar"
I0225 22:03:47.138] has not:No resources found
I0225 22:03:47.218] Successful
I0225 22:03:47.218] message:No resources found.
I0225 22:03:47.218] has:No resources found
I0225 22:03:47.301] Successful
I0225 22:03:47.301] message:
I0225 22:03:47.301] has not:No resources found
I0225 22:03:47.380] Successful
I0225 22:03:47.381] message:No resources found.
I0225 22:03:47.381] has:No resources found
I0225 22:03:47.468] get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:03:47.551] Successful
I0225 22:03:47.552] message:Error from server (NotFound): pods "abc" not found
I0225 22:03:47.552] has:pods "abc" not found
I0225 22:03:47.553] FAIL!
I0225 22:03:47.554] message:Error from server (NotFound): pods "abc" not found
I0225 22:03:47.554] has not:List
I0225 22:03:47.554] 99 /go/src/k8s.io/kubernetes/test/cmd/../../test/cmd/get.sh
I0225 22:03:47.665] Successful
I0225 22:03:47.666] message:I0225 22:03:47.616207   58591 loader.go:359] Config loaded from file /tmp/tmp.MpP3vN2WwS/.kube/config
I0225 22:03:47.666] I0225 22:03:47.617710   58591 round_trippers.go:438] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0225 22:03:47.666] I0225 22:03:47.637399   58591 round_trippers.go:438] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 2 milliseconds
... skipping 701 lines ...
I0225 22:03:51.102] }
I0225 22:03:51.183] get.sh:155: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0225 22:03:51.409] <no value>Successful
I0225 22:03:51.410] message:valid-pod:
I0225 22:03:51.410] has:valid-pod:
I0225 22:03:51.488] Successful
I0225 22:03:51.488] message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
I0225 22:03:51.488] 	template was:
I0225 22:03:51.488] 		{.missing}
I0225 22:03:51.489] 	object given to jsonpath engine was:
I0225 22:03:51.490] 		map[string]interface {}{"kind":"Pod", "apiVersion":"v1", "metadata":map[string]interface {}{"namespace":"namespace-1551132230-18024", "selfLink":"/api/v1/namespaces/namespace-1551132230-18024/pods/valid-pod", "uid":"3bcbfc7c-3949-11e9-a41f-0242ac110002", "resourceVersion":"698", "creationTimestamp":"2019-02-25T22:03:51Z", "labels":map[string]interface {}{"name":"valid-pod"}, "managedFields":[]interface {}{map[string]interface {}{"manager":"kubectl", "operation":"Update", "apiVersion":"v1", "time":"2019-02-25T22:03:51Z", "fields":map[string]interface {}{"f:metadata":map[string]interface {}{"f:labels":map[string]interface {}{".":map[string]interface {}{}, "f:name":map[string]interface {}{}}}, "f:spec":map[string]interface {}{"f:enableServiceLinks":map[string]interface {}{}, "f:priority":map[string]interface {}{}, "f:restartPolicy":map[string]interface {}{}, "f:schedulerName":map[string]interface {}{}, "f:securityContext":map[string]interface {}{}, "f:terminationGracePeriodSeconds":map[string]interface {}{}, "f:containers":map[string]interface {}{"k:{\"name\":\"kubernetes-serve-hostname\"}":map[string]interface {}{"f:terminationMessagePolicy":map[string]interface {}{}, ".":map[string]interface {}{}, "f:image":map[string]interface {}{}, "f:imagePullPolicy":map[string]interface {}{}, "f:name":map[string]interface {}{}, "f:resources":map[string]interface {}{"f:requests":map[string]interface {}{"f:memory":map[string]interface {}{}, ".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}}, ".":map[string]interface {}{}, "f:limits":map[string]interface {}{".":map[string]interface {}{}, "f:cpu":map[string]interface {}{}, "f:memory":map[string]interface {}{}}}, "f:terminationMessagePath":map[string]interface {}{}}}, "f:dnsPolicy":map[string]interface {}{}}}}}, "name":"valid-pod"}, "spec":map[string]interface {}{"enableServiceLinks":true, "containers":[]interface {}{map[string]interface {}{"terminationMessagePolicy":"File", 
"imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "image":"k8s.gcr.io/serve_hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"memory":"512Mi", "cpu":"1"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log"}}, "restartPolicy":"Always", "terminationGracePeriodSeconds":30, "dnsPolicy":"ClusterFirst", "securityContext":map[string]interface {}{}, "schedulerName":"default-scheduler", "priority":0}, "status":map[string]interface {}{"qosClass":"Guaranteed", "phase":"Pending"}}
I0225 22:03:51.490] has:missing is not found
I0225 22:03:51.565] Successful
I0225 22:03:51.566] message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
I0225 22:03:51.566] 	template was:
I0225 22:03:51.566] 		{{.missing}}
I0225 22:03:51.566] 	raw data was:
I0225 22:03:51.567] 		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2019-02-25T22:03:51Z","labels":{"name":"valid-pod"},"managedFields":[{"apiVersion":"v1","fields":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"kubernetes-serve-hostname\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{".":{},"f:limits":{".":{},"f:cpu":{},"f:memory":{}},"f:requests":{".":{},"f:cpu":{},"f:memory":{}}},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:priority":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},"manager":"kubectl","operation":"Update","time":"2019-02-25T22:03:51Z"}],"name":"valid-pod","namespace":"namespace-1551132230-18024","resourceVersion":"698","selfLink":"/api/v1/namespaces/namespace-1551132230-18024/pods/valid-pod","uid":"3bcbfc7c-3949-11e9-a41f-0242ac110002"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
I0225 22:03:51.567] 	object given to template engine was:
I0225 22:03:51.568] 		map[apiVersion:v1 kind:Pod metadata:map[uid:3bcbfc7c-3949-11e9-a41f-0242ac110002 creationTimestamp:2019-02-25T22:03:51Z labels:map[name:valid-pod] managedFields:[map[fields:map[f:metadata:map[f:labels:map[.:map[] f:name:map[]]] f:spec:map[f:restartPolicy:map[] f:schedulerName:map[] f:securityContext:map[] f:terminationGracePeriodSeconds:map[] f:containers:map[k:{"name":"kubernetes-serve-hostname"}:map[f:name:map[] f:resources:map[.:map[] f:limits:map[.:map[] f:cpu:map[] f:memory:map[]] f:requests:map[f:memory:map[] .:map[] f:cpu:map[]]] f:terminationMessagePath:map[] f:terminationMessagePolicy:map[] .:map[] f:image:map[] f:imagePullPolicy:map[]]] f:dnsPolicy:map[] f:enableServiceLinks:map[] f:priority:map[]]] manager:kubectl operation:Update time:2019-02-25T22:03:51Z apiVersion:v1]] name:valid-pod namespace:namespace-1551132230-18024 resourceVersion:698 selfLink:/api/v1/namespaces/namespace-1551132230-18024/pods/valid-pod] spec:map[enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30 containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[memory:512Mi cpu:1] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst] status:map[phase:Pending qosClass:Guaranteed]]
I0225 22:03:51.568] has:map has no entry for key "missing"
W0225 22:03:51.668] error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
W0225 22:03:52.639] E0225 22:03:52.638544   58977 streamwatcher.go:109] Unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)
I0225 22:03:52.739] Successful
I0225 22:03:52.740] message:NAME        READY   STATUS    RESTARTS   AGE
I0225 22:03:52.740] valid-pod   0/1     Pending   0          0s
I0225 22:03:52.740] has:STATUS
I0225 22:03:52.740] Successful
... skipping 152 lines ...
I0225 22:03:54.919]   terminationGracePeriodSeconds: 30
I0225 22:03:54.919] status:
I0225 22:03:54.919]   phase: Pending
I0225 22:03:54.919]   qosClass: Guaranteed
I0225 22:03:54.919] has:name: valid-pod
I0225 22:03:54.919] Successful
I0225 22:03:54.920] message:Error from server (NotFound): pods "invalid-pod" not found
I0225 22:03:54.920] has:"invalid-pod" not found
I0225 22:03:54.975] pod "valid-pod" deleted
I0225 22:03:55.065] get.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:03:55.216] pod/redis-master created
I0225 22:03:55.220] pod/valid-pod created
I0225 22:03:55.309] Successful
... skipping 247 lines ...
I0225 22:03:59.392] Running command: run_create_secret_tests
I0225 22:03:59.410] 
I0225 22:03:59.413] +++ Running case: test-cmd.run_create_secret_tests 
I0225 22:03:59.415] +++ working dir: /go/src/k8s.io/kubernetes
I0225 22:03:59.417] +++ command: run_create_secret_tests
I0225 22:03:59.504] Successful
I0225 22:03:59.504] message:Error from server (NotFound): secrets "mysecret" not found
I0225 22:03:59.504] has:secrets "mysecret" not found
W0225 22:03:59.605] I0225 22:03:58.579378   44034 clientconn.go:551] parsed scheme: ""
W0225 22:03:59.605] I0225 22:03:58.579415   44034 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0225 22:03:59.605] I0225 22:03:58.579449   44034 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0225 22:03:59.605] I0225 22:03:58.579485   44034 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:03:59.605] I0225 22:03:58.579828   44034 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:03:59.605] No resources found.
W0225 22:03:59.605] No resources found.
I0225 22:03:59.706] Successful
I0225 22:03:59.706] message:Error from server (NotFound): secrets "mysecret" not found
I0225 22:03:59.706] has:secrets "mysecret" not found
I0225 22:03:59.706] Successful
I0225 22:03:59.706] message:user-specified
I0225 22:03:59.706] has:user-specified
I0225 22:03:59.729] Successful
I0225 22:03:59.801] {"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-create-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-create-cm","uid":"41085b7d-3949-11e9-a41f-0242ac110002","resourceVersion":"773","creationTimestamp":"2019-02-25T22:03:59Z"}}
... skipping 147 lines ...
I0225 22:04:02.618] has:Timeout exceeded while reading body
I0225 22:04:02.695] Successful
I0225 22:04:02.695] message:NAME        READY   STATUS    RESTARTS   AGE
I0225 22:04:02.695] valid-pod   0/1     Pending   0          1s
I0225 22:04:02.695] has:valid-pod
I0225 22:04:02.763] Successful
I0225 22:04:02.763] message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
I0225 22:04:02.763] has:Invalid timeout value
I0225 22:04:02.840] pod "valid-pod" deleted
I0225 22:04:02.860] +++ exit code: 0
I0225 22:04:02.909] Recording: run_crd_tests
I0225 22:04:02.910] Running command: run_crd_tests
I0225 22:04:02.931] 
... skipping 221 lines ...
I0225 22:04:07.155] foo.company.com/test patched
I0225 22:04:07.243] crd.sh:237: Successful get foos/test {{.patched}}: value1
I0225 22:04:07.323] foo.company.com/test patched
I0225 22:04:07.411] crd.sh:239: Successful get foos/test {{.patched}}: value2
I0225 22:04:07.493] foo.company.com/test patched
I0225 22:04:07.580] crd.sh:241: Successful get foos/test {{.patched}}: <no value>
I0225 22:04:07.727] +++ [0225 22:04:07] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
I0225 22:04:07.790] {
I0225 22:04:07.790]     "apiVersion": "company.com/v1",
I0225 22:04:07.791]     "kind": "Foo",
I0225 22:04:07.791]     "metadata": {
I0225 22:04:07.791]         "annotations": {
I0225 22:04:07.791]             "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 289 lines ...
I0225 22:04:12.959] crd.sh:408: Successful get foos/test-list {{.otherField}}: <no value>
I0225 22:04:13.048] crd.sh:409: Successful get bars/test-list {{.otherField}}: <no value>
I0225 22:04:13.133] crd.sh:412: Successful get foos/test-list {{.newField}}: <no value>
I0225 22:04:13.217] crd.sh:413: Successful get bars/test-list {{.newField}}: <no value>
I0225 22:04:13.375] foo.company.com/test-list configured
I0225 22:04:13.382] bar.company.com/test-list configured
W0225 22:04:13.483] E0225 22:04:03.591540   47447 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "company.com/v1, Resource=foos": unable to monitor quota for resource "company.com/v1, Resource=foos"]
W0225 22:04:13.484] I0225 22:04:04.113108   47447 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0225 22:04:13.484] I0225 22:04:04.114541   44034 clientconn.go:551] parsed scheme: ""
W0225 22:04:13.484] I0225 22:04:04.114576   44034 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0225 22:04:13.484] I0225 22:04:04.114617   44034 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0225 22:04:13.484] I0225 22:04:04.114669   44034 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:04:13.484] I0225 22:04:04.115124   44034 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 40 lines ...
W0225 22:04:20.537] I0225 22:04:20.536370   44034 clientconn.go:557] scheme "" not registered, fallback to default scheme
W0225 22:04:20.537] I0225 22:04:20.536407   44034 resolver_conn_wrapper.go:116] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
W0225 22:04:20.537] I0225 22:04:20.536503   44034 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:04:20.537] I0225 22:04:20.537019   44034 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0225 22:04:20.742] crd.sh:459: Successful get bars {{len .items}}: 0
I0225 22:04:20.903] customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
W0225 22:04:21.004] Error from server (NotFound): namespaces "non-native-resources" not found
I0225 22:04:21.104] customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
I0225 22:04:21.117] customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
I0225 22:04:21.225] customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
I0225 22:04:21.259] +++ exit code: 0
I0225 22:04:21.332] Recording: run_cmd_with_img_tests
I0225 22:04:21.332] Running command: run_cmd_with_img_tests
... skipping 7 lines ...
I0225 22:04:21.521] +++ [0225 22:04:21] Testing cmd with image
I0225 22:04:21.617] Successful
I0225 22:04:21.617] message:deployment.apps/test1 created
I0225 22:04:21.617] has:deployment.apps/test1 created
I0225 22:04:21.701] deployment.extensions "test1" deleted
I0225 22:04:21.780] Successful
I0225 22:04:21.780] message:error: Invalid image name "InvalidImageName": invalid reference format
I0225 22:04:21.780] has:error: Invalid image name "InvalidImageName": invalid reference format
I0225 22:04:21.794] +++ exit code: 0
I0225 22:04:21.842] +++ [0225 22:04:21] Testing recursive resources
I0225 22:04:21.848] +++ [0225 22:04:21] Creating namespace namespace-1551132261-27606
I0225 22:04:21.921] namespace/namespace-1551132261-27606 created
I0225 22:04:21.989] Context "test" modified.
I0225 22:04:22.080] generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:04:22.335] generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:04:22.337] Successful
I0225 22:04:22.337] message:pod/busybox0 created
I0225 22:04:22.337] pod/busybox1 created
I0225 22:04:22.338] error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0225 22:04:22.338] has:error validating data: kind not set
I0225 22:04:22.428] generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:04:22.605] generic-resources.sh:219: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
I0225 22:04:22.607] Successful
I0225 22:04:22.608] message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0225 22:04:22.608] has:Object 'Kind' is missing
I0225 22:04:22.700] generic-resources.sh:226: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:04:22.959] generic-resources.sh:230: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0225 22:04:22.961] Successful
I0225 22:04:22.962] message:pod/busybox0 replaced
I0225 22:04:22.962] pod/busybox1 replaced
I0225 22:04:22.962] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0225 22:04:22.962] has:error validating data: kind not set
I0225 22:04:23.051] generic-resources.sh:235: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:04:23.143] Successful
I0225 22:04:23.143] message:Name:               busybox0
I0225 22:04:23.143] Namespace:          namespace-1551132261-27606
I0225 22:04:23.143] Priority:           0
I0225 22:04:23.144] PriorityClassName:  <none>
... skipping 159 lines ...
I0225 22:04:23.156] has:Object 'Kind' is missing
I0225 22:04:23.241] generic-resources.sh:245: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:04:23.430] generic-resources.sh:249: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
I0225 22:04:23.432] Successful
I0225 22:04:23.432] message:pod/busybox0 annotated
I0225 22:04:23.432] pod/busybox1 annotated
I0225 22:04:23.432] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0225 22:04:23.432] has:Object 'Kind' is missing
I0225 22:04:23.523] generic-resources.sh:254: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:04:23.804] generic-resources.sh:258: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
I0225 22:04:23.805] Successful
I0225 22:04:23.805] message:Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0225 22:04:23.806] pod/busybox0 configured
I0225 22:04:23.806] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
I0225 22:04:23.806] pod/busybox1 configured
I0225 22:04:23.806] error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0225 22:04:23.806] has:error validating data: kind not set
I0225 22:04:23.896] generic-resources.sh:264: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:04:24.045] deployment.apps/nginx created
W0225 22:04:24.146] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0225 22:04:24.146] I0225 22:04:21.605731   47447 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1551132261-21113", Name:"test1", UID:"4e0696c1-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"888", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set test1-848d5d4b47 to 1
W0225 22:04:24.146] I0225 22:04:21.612924   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132261-21113", Name:"test1-848d5d4b47", UID:"4e078525-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"889", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test1-848d5d4b47-pwwl5
W0225 22:04:24.146] I0225 22:04:24.051715   47447 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1551132261-27606", Name:"nginx", UID:"4f7bb11e-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"913", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-5f7cff5b56 to 3
... skipping 49 lines ...
I0225 22:04:24.498] deployment.extensions "nginx" deleted
I0225 22:04:24.601] generic-resources.sh:280: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:04:24.769] generic-resources.sh:284: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:04:24.772] Successful
I0225 22:04:24.772] message:kubectl convert is DEPRECATED and will be removed in a future version.
I0225 22:04:24.772] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0225 22:04:24.773] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0225 22:04:24.773] has:Object 'Kind' is missing
I0225 22:04:24.862] generic-resources.sh:289: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:04:24.947] Successful
I0225 22:04:24.948] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0225 22:04:24.948] has:busybox0:busybox1:
I0225 22:04:24.950] Successful
I0225 22:04:24.950] message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0225 22:04:24.950] has:Object 'Kind' is missing
I0225 22:04:25.043] generic-resources.sh:298: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:04:25.135] pod/busybox0 labeled pod/busybox1 labeled error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
W0225 22:04:25.236] kubectl convert is DEPRECATED and will be removed in a future version.
W0225 22:04:25.236] In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
I0225 22:04:25.337] generic-resources.sh:303: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
I0225 22:04:25.337] Successful
I0225 22:04:25.337] message:pod/busybox0 labeled
I0225 22:04:25.337] pod/busybox1 labeled
I0225 22:04:25.338] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0225 22:04:25.338] has:Object 'Kind' is missing
I0225 22:04:25.347] generic-resources.sh:308: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:04:25.438] pod/busybox0 patched pod/busybox1 patched error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0225 22:04:25.532] generic-resources.sh:313: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
I0225 22:04:25.535] Successful
I0225 22:04:25.535] message:pod/busybox0 patched
I0225 22:04:25.535] pod/busybox1 patched
I0225 22:04:25.536] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0225 22:04:25.536] has:Object 'Kind' is missing
I0225 22:04:25.626] generic-resources.sh:318: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:04:25.815] generic-resources.sh:322: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:04:25.817] Successful
I0225 22:04:25.817] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0225 22:04:25.817] pod "busybox0" force deleted
I0225 22:04:25.818] pod "busybox1" force deleted
I0225 22:04:25.818] error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
I0225 22:04:25.818] has:Object 'Kind' is missing
I0225 22:04:25.906] generic-resources.sh:327: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:04:26.060] replicationcontroller/busybox0 created
I0225 22:04:26.065] replicationcontroller/busybox1 created
I0225 22:04:26.161] generic-resources.sh:331: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:04:26.249] generic-resources.sh:336: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:04:26.341] generic-resources.sh:337: Successful get rc busybox0 {{.spec.replicas}}: 1
I0225 22:04:26.427] generic-resources.sh:338: Successful get rc busybox1 {{.spec.replicas}}: 1
I0225 22:04:26.609] generic-resources.sh:343: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0225 22:04:26.697] generic-resources.sh:344: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
I0225 22:04:26.699] Successful
I0225 22:04:26.699] message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
I0225 22:04:26.699] horizontalpodautoscaler.autoscaling/busybox1 autoscaled
I0225 22:04:26.699] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0225 22:04:26.699] has:Object 'Kind' is missing
I0225 22:04:26.778] horizontalpodautoscaler.autoscaling "busybox0" deleted
I0225 22:04:26.863] horizontalpodautoscaler.autoscaling "busybox1" deleted
I0225 22:04:26.960] generic-resources.sh:352: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:04:27.051] generic-resources.sh:353: Successful get rc busybox0 {{.spec.replicas}}: 1
I0225 22:04:27.138] generic-resources.sh:354: Successful get rc busybox1 {{.spec.replicas}}: 1
I0225 22:04:27.325] generic-resources.sh:358: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0225 22:04:27.413] generic-resources.sh:359: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
I0225 22:04:27.415] Successful
I0225 22:04:27.415] message:service/busybox0 exposed
I0225 22:04:27.415] service/busybox1 exposed
I0225 22:04:27.415] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0225 22:04:27.416] has:Object 'Kind' is missing
I0225 22:04:27.505] generic-resources.sh:365: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:04:27.592] generic-resources.sh:366: Successful get rc busybox0 {{.spec.replicas}}: 1
I0225 22:04:27.678] generic-resources.sh:367: Successful get rc busybox1 {{.spec.replicas}}: 1
I0225 22:04:27.873] generic-resources.sh:371: Successful get rc busybox0 {{.spec.replicas}}: 2
I0225 22:04:27.968] generic-resources.sh:372: Successful get rc busybox1 {{.spec.replicas}}: 2
I0225 22:04:27.971] Successful
I0225 22:04:27.971] message:replicationcontroller/busybox0 scaled
I0225 22:04:27.971] replicationcontroller/busybox1 scaled
I0225 22:04:27.971] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0225 22:04:27.971] has:Object 'Kind' is missing
I0225 22:04:28.072] generic-resources.sh:377: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:04:28.259] generic-resources.sh:381: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:04:28.262] Successful
I0225 22:04:28.262] message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0225 22:04:28.262] replicationcontroller "busybox0" force deleted
I0225 22:04:28.262] replicationcontroller "busybox1" force deleted
I0225 22:04:28.262] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0225 22:04:28.263] has:Object 'Kind' is missing
I0225 22:04:28.358] generic-resources.sh:386: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:04:28.519] deployment.apps/nginx1-deployment created
I0225 22:04:28.526] deployment.apps/nginx0-deployment created
W0225 22:04:28.626] I0225 22:04:25.631294   47447 namespace_controller.go:171] Namespace has been deleted non-native-resources
W0225 22:04:28.627] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0225 22:04:28.627] I0225 22:04:26.065960   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551132261-27606", Name:"busybox0", UID:"50af4e9d-3949-11e9-a41f-0242ac110002", APIVersion:"v1", ResourceVersion:"944", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-lfmjf
W0225 22:04:28.628] I0225 22:04:26.069916   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551132261-27606", Name:"busybox1", UID:"50b01c49-3949-11e9-a41f-0242ac110002", APIVersion:"v1", ResourceVersion:"945", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-gghbw
W0225 22:04:28.628] I0225 22:04:27.774864   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551132261-27606", Name:"busybox0", UID:"50af4e9d-3949-11e9-a41f-0242ac110002", APIVersion:"v1", ResourceVersion:"966", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-xg9xk
W0225 22:04:28.628] I0225 22:04:27.785611   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551132261-27606", Name:"busybox1", UID:"50b01c49-3949-11e9-a41f-0242ac110002", APIVersion:"v1", ResourceVersion:"970", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-pn8zg
W0225 22:04:28.628] error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0225 22:04:28.629] I0225 22:04:28.526559   47447 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1551132261-27606", Name:"nginx1-deployment", UID:"52267517-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"986", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-7c76c6cbb8 to 2
W0225 22:04:28.629] I0225 22:04:28.530123   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132261-27606", Name:"nginx1-deployment-7c76c6cbb8", UID:"52276021-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"987", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7c76c6cbb8-d9ln7
W0225 22:04:28.629] I0225 22:04:28.533609   47447 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1551132261-27606", Name:"nginx0-deployment", UID:"522766fd-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"988", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-7bb85585d7 to 2
W0225 22:04:28.630] I0225 22:04:28.537145   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132261-27606", Name:"nginx1-deployment-7c76c6cbb8", UID:"52276021-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"987", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7c76c6cbb8-9gm5s
W0225 22:04:28.630] I0225 22:04:28.542177   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132261-27606", Name:"nginx0-deployment-7bb85585d7", UID:"522867e7-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"991", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-7bb85585d7-pvkvs
W0225 22:04:28.630] I0225 22:04:28.550535   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132261-27606", Name:"nginx0-deployment-7bb85585d7", UID:"522867e7-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"991", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-7bb85585d7-vhpt4
I0225 22:04:28.731] generic-resources.sh:390: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
I0225 22:04:28.751] generic-resources.sh:391: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0225 22:04:28.951] generic-resources.sh:395: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
I0225 22:04:28.954] Successful
I0225 22:04:28.954] message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
I0225 22:04:28.954] deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
I0225 22:04:28.955] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0225 22:04:28.955] has:Object 'Kind' is missing
I0225 22:04:29.047] deployment.apps/nginx1-deployment paused
I0225 22:04:29.056] deployment.apps/nginx0-deployment paused
I0225 22:04:29.155] generic-resources.sh:402: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
I0225 22:04:29.157] Successful
I0225 22:04:29.158] message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
... skipping 10 lines ...
I0225 22:04:29.494] 1         <none>
I0225 22:04:29.494] 
I0225 22:04:29.495] deployment.apps/nginx0-deployment 
I0225 22:04:29.495] REVISION  CHANGE-CAUSE
I0225 22:04:29.495] 1         <none>
I0225 22:04:29.495] 
I0225 22:04:29.495] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0225 22:04:29.495] has:nginx0-deployment
I0225 22:04:29.496] Successful
I0225 22:04:29.496] message:deployment.apps/nginx1-deployment 
I0225 22:04:29.496] REVISION  CHANGE-CAUSE
I0225 22:04:29.496] 1         <none>
I0225 22:04:29.497] 
I0225 22:04:29.497] deployment.apps/nginx0-deployment 
I0225 22:04:29.497] REVISION  CHANGE-CAUSE
I0225 22:04:29.497] 1         <none>
I0225 22:04:29.497] 
I0225 22:04:29.497] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0225 22:04:29.497] has:nginx1-deployment
I0225 22:04:29.498] Successful
I0225 22:04:29.499] message:deployment.apps/nginx1-deployment 
I0225 22:04:29.499] REVISION  CHANGE-CAUSE
I0225 22:04:29.499] 1         <none>
I0225 22:04:29.499] 
I0225 22:04:29.499] deployment.apps/nginx0-deployment 
I0225 22:04:29.499] REVISION  CHANGE-CAUSE
I0225 22:04:29.499] 1         <none>
I0225 22:04:29.499] 
I0225 22:04:29.500] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0225 22:04:29.500] has:Object 'Kind' is missing
I0225 22:04:29.573] deployment.apps "nginx1-deployment" force deleted
I0225 22:04:29.578] deployment.apps "nginx0-deployment" force deleted
W0225 22:04:29.679] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0225 22:04:29.679] error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
I0225 22:04:30.676] generic-resources.sh:424: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:04:30.845] replicationcontroller/busybox0 created
I0225 22:04:30.851] replicationcontroller/busybox1 created
I0225 22:04:30.954] generic-resources.sh:428: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
I0225 22:04:31.048] Successful
I0225 22:04:31.049] message:no rollbacker has been implemented for "ReplicationController"
... skipping 4 lines ...
I0225 22:04:31.051] message:no rollbacker has been implemented for "ReplicationController"
I0225 22:04:31.051] no rollbacker has been implemented for "ReplicationController"
I0225 22:04:31.052] unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0225 22:04:31.052] has:Object 'Kind' is missing
I0225 22:04:31.151] Successful
I0225 22:04:31.152] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0225 22:04:31.152] error: replicationcontrollers "busybox0" pausing is not supported
I0225 22:04:31.152] error: replicationcontrollers "busybox1" pausing is not supported
I0225 22:04:31.152] has:Object 'Kind' is missing
I0225 22:04:31.154] Successful
I0225 22:04:31.154] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0225 22:04:31.154] error: replicationcontrollers "busybox0" pausing is not supported
I0225 22:04:31.154] error: replicationcontrollers "busybox1" pausing is not supported
I0225 22:04:31.154] has:replicationcontrollers "busybox0" pausing is not supported
I0225 22:04:31.156] Successful
I0225 22:04:31.157] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0225 22:04:31.157] error: replicationcontrollers "busybox0" pausing is not supported
I0225 22:04:31.157] error: replicationcontrollers "busybox1" pausing is not supported
I0225 22:04:31.157] has:replicationcontrollers "busybox1" pausing is not supported
I0225 22:04:31.256] Successful
I0225 22:04:31.257] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0225 22:04:31.257] error: replicationcontrollers "busybox0" resuming is not supported
I0225 22:04:31.257] error: replicationcontrollers "busybox1" resuming is not supported
I0225 22:04:31.257] has:Object 'Kind' is missing
I0225 22:04:31.260] Successful
I0225 22:04:31.265] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0225 22:04:31.266] error: replicationcontrollers "busybox0" resuming is not supported
I0225 22:04:31.266] error: replicationcontrollers "busybox1" resuming is not supported
I0225 22:04:31.266] has:replicationcontrollers "busybox0" resuming is not supported
I0225 22:04:31.266] Successful
I0225 22:04:31.266] message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0225 22:04:31.267] error: replicationcontrollers "busybox0" resuming is not supported
I0225 22:04:31.267] error: replicationcontrollers "busybox1" resuming is not supported
I0225 22:04:31.267] has:replicationcontrollers "busybox0" resuming is not supported
I0225 22:04:31.346] replicationcontroller "busybox0" force deleted
I0225 22:04:31.351] replicationcontroller "busybox1" force deleted
W0225 22:04:31.452] error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
W0225 22:04:31.453] I0225 22:04:30.851817   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551132261-27606", Name:"busybox0", UID:"5389765b-3949-11e9-a41f-0242ac110002", APIVersion:"v1", ResourceVersion:"1035", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-dt8bn
W0225 22:04:31.453] I0225 22:04:30.856354   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551132261-27606", Name:"busybox1", UID:"538a58a7-3949-11e9-a41f-0242ac110002", APIVersion:"v1", ResourceVersion:"1036", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-5gwl9
W0225 22:04:31.453] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
W0225 22:04:31.454] error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
I0225 22:04:32.358] Recording: run_namespace_tests
I0225 22:04:32.359] Running command: run_namespace_tests
I0225 22:04:32.380] 
I0225 22:04:32.382] +++ Running case: test-cmd.run_namespace_tests 
I0225 22:04:32.384] +++ working dir: /go/src/k8s.io/kubernetes
I0225 22:04:32.386] +++ command: run_namespace_tests
I0225 22:04:32.397] +++ [0225 22:04:32] Testing kubectl(v1:namespaces)
I0225 22:04:32.471] namespace/my-namespace created
I0225 22:04:32.566] core.sh:1321: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
I0225 22:04:32.644] (Bnamespace "my-namespace" deleted
W0225 22:04:33.744] E0225 22:04:33.744208   47447 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
W0225 22:04:34.366] I0225 22:04:34.365798   47447 controller_utils.go:1021] Waiting for caches to sync for garbage collector controller
W0225 22:04:34.466] I0225 22:04:34.466251   47447 controller_utils.go:1028] Caches are synced for garbage collector controller
I0225 22:04:37.769] namespace/my-namespace condition met
I0225 22:04:37.862] Successful
I0225 22:04:37.862] message:Error from server (NotFound): namespaces "my-namespace" not found
I0225 22:04:37.862] has: not found
I0225 22:04:37.971] core.sh:1336: Successful get namespaces {{range.items}}{{ if eq $id_field \"other\" }}found{{end}}{{end}}:: :
I0225 22:04:38.046] (Bnamespace/other created
I0225 22:04:38.143] core.sh:1340: Successful get namespaces/other {{.metadata.name}}: other
I0225 22:04:38.229] (Bcore.sh:1344: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:04:38.379] (Bpod/valid-pod created
I0225 22:04:38.474] core.sh:1348: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0225 22:04:38.560] (Bcore.sh:1350: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0225 22:04:38.638] (BSuccessful
I0225 22:04:38.638] message:error: a resource cannot be retrieved by name across all namespaces
I0225 22:04:38.638] has:a resource cannot be retrieved by name across all namespaces
I0225 22:04:38.724] core.sh:1357: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
I0225 22:04:38.799] (Bpod "valid-pod" force deleted
I0225 22:04:38.890] core.sh:1361: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:04:38.963] (Bnamespace "other" deleted
W0225 22:04:39.064] warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 115 lines ...
I0225 22:04:59.360] +++ command: run_client_config_tests
I0225 22:04:59.370] +++ [0225 22:04:59] Creating namespace namespace-1551132299-24253
I0225 22:04:59.437] namespace/namespace-1551132299-24253 created
I0225 22:04:59.500] Context "test" modified.
I0225 22:04:59.506] +++ [0225 22:04:59] Testing client config
I0225 22:04:59.570] Successful
I0225 22:04:59.570] message:error: stat missing: no such file or directory
I0225 22:04:59.570] has:missing: no such file or directory
I0225 22:04:59.631] Successful
I0225 22:04:59.631] message:error: stat missing: no such file or directory
I0225 22:04:59.631] has:missing: no such file or directory
I0225 22:04:59.696] Successful
I0225 22:04:59.697] message:error: stat missing: no such file or directory
I0225 22:04:59.697] has:missing: no such file or directory
I0225 22:04:59.759] Successful
I0225 22:04:59.759] message:Error in configuration: context was not found for specified context: missing-context
I0225 22:04:59.759] has:context was not found for specified context: missing-context
I0225 22:04:59.821] Successful
I0225 22:04:59.821] message:error: no server found for cluster "missing-cluster"
I0225 22:04:59.821] has:no server found for cluster "missing-cluster"
I0225 22:04:59.885] Successful
I0225 22:04:59.885] message:error: auth info "missing-user" does not exist
I0225 22:04:59.885] has:auth info "missing-user" does not exist
I0225 22:05:00.016] Successful
I0225 22:05:00.016] message:error: Error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
I0225 22:05:00.017] has:Error loading config file
I0225 22:05:00.082] Successful
I0225 22:05:00.082] message:error: stat missing-config: no such file or directory
I0225 22:05:00.083] has:no such file or directory
I0225 22:05:00.095] +++ exit code: 0
I0225 22:05:00.140] Recording: run_service_accounts_tests
I0225 22:05:00.140] Running command: run_service_accounts_tests
I0225 22:05:00.158] 
I0225 22:05:00.160] +++ Running case: test-cmd.run_service_accounts_tests 
... skipping 34 lines ...
I0225 22:05:06.839] Labels:                        run=pi
I0225 22:05:06.839] Annotations:                   <none>
I0225 22:05:06.839] Schedule:                      59 23 31 2 *
I0225 22:05:06.839] Concurrency Policy:            Allow
I0225 22:05:06.839] Suspend:                       False
I0225 22:05:06.839] Successful Job History Limit:  824640839704
I0225 22:05:06.840] Failed Job History Limit:      1
I0225 22:05:06.840] Starting Deadline Seconds:     <unset>
I0225 22:05:06.840] Selector:                      <unset>
I0225 22:05:06.840] Parallelism:                   <unset>
I0225 22:05:06.840] Completions:                   <unset>
I0225 22:05:06.840] Pod Template:
I0225 22:05:06.840]   Labels:  run=pi
... skipping 31 lines ...
I0225 22:05:07.352]                 job-name=test-job
I0225 22:05:07.352]                 run=pi
I0225 22:05:07.353] Annotations:    cronjob.kubernetes.io/instantiate: manual
I0225 22:05:07.353] Parallelism:    1
I0225 22:05:07.353] Completions:    1
I0225 22:05:07.353] Start Time:     Mon, 25 Feb 2019 22:05:07 +0000
I0225 22:05:07.353] Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
I0225 22:05:07.353] Pod Template:
I0225 22:05:07.353]   Labels:  controller-uid=692468c6-3949-11e9-a41f-0242ac110002
I0225 22:05:07.353]            job-name=test-job
I0225 22:05:07.353]            run=pi
I0225 22:05:07.353]   Containers:
I0225 22:05:07.353]    pi:
... skipping 390 lines ...
I0225 22:05:19.305]   selector:
I0225 22:05:19.305]     role: padawan
I0225 22:05:19.305]   sessionAffinity: None
I0225 22:05:19.305]   type: ClusterIP
I0225 22:05:19.305] status:
I0225 22:05:19.305]   loadBalancer: {}
W0225 22:05:19.406] error: you must specify resources by --filename when --local is set.
W0225 22:05:19.406] Example resource specifications include:
W0225 22:05:19.406]    '-f rsrc.yaml'
W0225 22:05:19.406]    '--filename=rsrc.json'
I0225 22:05:19.507] core.sh:886: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0225 22:05:19.645] (Bcore.sh:893: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
I0225 22:05:19.731] (Bservice "redis-master" deleted
... skipping 33 lines ...
I0225 22:05:22.754] (Bservice/testmetadata created
I0225 22:05:22.754] deployment.apps/testmetadata created
W0225 22:05:22.855] kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
W0225 22:05:22.855] I0225 22:05:22.733719   47447 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"testmetadata", UID:"7275c912-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1224", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set testmetadata-74fbc8d655 to 2
W0225 22:05:22.856] I0225 22:05:22.744594   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-74fbc8d655", UID:"7276c3d4-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1225", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-74fbc8d655-wv9lv
W0225 22:05:22.856] I0225 22:05:22.759901   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"testmetadata-74fbc8d655", UID:"7276c3d4-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1225", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: testmetadata-74fbc8d655-ff5tz
W0225 22:05:22.857] E0225 22:05:22.763353   47447 event.go:247] Could not construct reference to: '&v1.Endpoints{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"testmetadata", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string{}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Subsets:[]v1.EndpointSubset{}}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Warning' 'FailedToCreateEndpoint' 'Failed to create endpoint for service default/testmetadata: endpoints "testmetadata" already exists'
I0225 22:05:22.957] core.sh:1001: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: testmetadata:
I0225 22:05:22.974] (Bcore.sh:1002: Successful get service testmetadata {{.metadata.annotations}}: map[zone-context:home]
I0225 22:05:23.069] (Bservice/exposemetadata exposed
I0225 22:05:23.165] core.sh:1008: Successful get service exposemetadata {{.metadata.annotations}}: map[zone-context:work]
I0225 22:05:23.246] (Bservice "exposemetadata" deleted
I0225 22:05:23.253] service "testmetadata" deleted
... skipping 62 lines ...
I0225 22:05:26.776] (Bapps.sh:80: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0225 22:05:26.869] (Bapps.sh:81: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0225 22:05:26.987] (Bdaemonset.extensions/bind rolled back
I0225 22:05:27.092] apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0225 22:05:27.191] (Bapps.sh:85: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0225 22:05:27.306] (BSuccessful
I0225 22:05:27.306] message:error: unable to find specified revision 1000000 in history
I0225 22:05:27.306] has:unable to find specified revision
I0225 22:05:27.406] apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0225 22:05:27.507] (Bapps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0225 22:05:27.617] (Bdaemonset.extensions/bind rolled back
I0225 22:05:27.719] apps.sh:93: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
I0225 22:05:27.813] (Bapps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 22 lines ...
I0225 22:05:29.190] Namespace:    namespace-1551132328-32669
I0225 22:05:29.190] Selector:     app=guestbook,tier=frontend
I0225 22:05:29.190] Labels:       app=guestbook
I0225 22:05:29.190]               tier=frontend
I0225 22:05:29.190] Annotations:  <none>
I0225 22:05:29.190] Replicas:     3 current / 3 desired
I0225 22:05:29.191] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0225 22:05:29.191] Pod Template:
I0225 22:05:29.191]   Labels:  app=guestbook
I0225 22:05:29.191]            tier=frontend
I0225 22:05:29.191]   Containers:
I0225 22:05:29.191]    php-redis:
I0225 22:05:29.191]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0225 22:05:29.313] Namespace:    namespace-1551132328-32669
I0225 22:05:29.313] Selector:     app=guestbook,tier=frontend
I0225 22:05:29.313] Labels:       app=guestbook
I0225 22:05:29.313]               tier=frontend
I0225 22:05:29.313] Annotations:  <none>
I0225 22:05:29.313] Replicas:     3 current / 3 desired
I0225 22:05:29.313] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0225 22:05:29.313] Pod Template:
I0225 22:05:29.313]   Labels:  app=guestbook
I0225 22:05:29.313]            tier=frontend
I0225 22:05:29.313]   Containers:
I0225 22:05:29.313]    php-redis:
I0225 22:05:29.314]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 10 lines ...
I0225 22:05:29.314]   Type    Reason            Age   From                    Message
I0225 22:05:29.314]   ----    ------            ----  ----                    -------
I0225 22:05:29.315]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-ct8zs
I0225 22:05:29.315]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-pdxrt
I0225 22:05:29.315]   Normal  SuccessfulCreate  1s    replication-controller  Created pod: frontend-qt44z
I0225 22:05:29.315] (B
W0225 22:05:29.419] E0225 22:05:27.650345   47447 daemon_controller.go:302] namespace-1551132325-22526/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1551132325-22526", SelfLink:"/apis/apps/v1/namespaces/namespace-1551132325-22526/daemonsets/bind", UID:"741d5e7e-3949-11e9-a41f-0242ac110002", ResourceVersion:"1290", Generation:4, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63686729125, loc:(*time.Location)(0x6a5f460)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"4", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1551132325-22526\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00292d120), Fields:(*v1.Fields)(0xc003a861c8)}, v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00292d260), Fields:(*v1.Fields)(0xc003a86218)}, v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00292da80), Fields:(*v1.Fields)(0xc003a862b0)}, v1.ManagedFieldsEntry{Manager:"kubectl", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00292db20), Fields:(*v1.Fields)(0xc003a862e0)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00292de80), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:latest", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"app", Image:"k8s.gcr.io/nginx:test-cmd", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00351d0d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000f6b500), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc00292df20), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc003a86348)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc00351d150)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:3, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
W0225 22:05:29.419] I0225 22:05:28.509496   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551132328-32669", Name:"frontend", UID:"75e6ccb0-3949-11e9-a41f-0242ac110002", APIVersion:"v1", ResourceVersion:"1299", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-7h6xm
W0225 22:05:29.419] I0225 22:05:28.514479   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551132328-32669", Name:"frontend", UID:"75e6ccb0-3949-11e9-a41f-0242ac110002", APIVersion:"v1", ResourceVersion:"1299", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-s6gjt
W0225 22:05:29.420] I0225 22:05:28.515248   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551132328-32669", Name:"frontend", UID:"75e6ccb0-3949-11e9-a41f-0242ac110002", APIVersion:"v1", ResourceVersion:"1299", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-chf8s
W0225 22:05:29.420] I0225 22:05:28.937062   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551132328-32669", Name:"frontend", UID:"76288670-3949-11e9-a41f-0242ac110002", APIVersion:"v1", ResourceVersion:"1315", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-ct8zs
W0225 22:05:29.420] I0225 22:05:28.942134   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551132328-32669", Name:"frontend", UID:"76288670-3949-11e9-a41f-0242ac110002", APIVersion:"v1", ResourceVersion:"1315", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-pdxrt
W0225 22:05:29.420] I0225 22:05:28.943226   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551132328-32669", Name:"frontend", UID:"76288670-3949-11e9-a41f-0242ac110002", APIVersion:"v1", ResourceVersion:"1315", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-qt44z
... skipping 2 lines ...
I0225 22:05:29.521] Namespace:    namespace-1551132328-32669
I0225 22:05:29.521] Selector:     app=guestbook,tier=frontend
I0225 22:05:29.522] Labels:       app=guestbook
I0225 22:05:29.522]               tier=frontend
I0225 22:05:29.522] Annotations:  <none>
I0225 22:05:29.522] Replicas:     3 current / 3 desired
I0225 22:05:29.522] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0225 22:05:29.522] Pod Template:
I0225 22:05:29.522]   Labels:  app=guestbook
I0225 22:05:29.522]            tier=frontend
I0225 22:05:29.522]   Containers:
I0225 22:05:29.522]    php-redis:
I0225 22:05:29.522]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
I0225 22:05:29.539] Namespace:    namespace-1551132328-32669
I0225 22:05:29.539] Selector:     app=guestbook,tier=frontend
I0225 22:05:29.539] Labels:       app=guestbook
I0225 22:05:29.539]               tier=frontend
I0225 22:05:29.539] Annotations:  <none>
I0225 22:05:29.540] Replicas:     3 current / 3 desired
I0225 22:05:29.540] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0225 22:05:29.540] Pod Template:
I0225 22:05:29.540]   Labels:  app=guestbook
I0225 22:05:29.540]            tier=frontend
I0225 22:05:29.540]   Containers:
I0225 22:05:29.540]    php-redis:
I0225 22:05:29.540]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
I0225 22:05:29.679] Namespace:    namespace-1551132328-32669
I0225 22:05:29.679] Selector:     app=guestbook,tier=frontend
I0225 22:05:29.679] Labels:       app=guestbook
I0225 22:05:29.679]               tier=frontend
I0225 22:05:29.679] Annotations:  <none>
I0225 22:05:29.679] Replicas:     3 current / 3 desired
I0225 22:05:29.679] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0225 22:05:29.679] Pod Template:
I0225 22:05:29.679]   Labels:  app=guestbook
I0225 22:05:29.680]            tier=frontend
I0225 22:05:29.680]   Containers:
I0225 22:05:29.680]    php-redis:
I0225 22:05:29.680]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0225 22:05:29.789] Namespace:    namespace-1551132328-32669
I0225 22:05:29.789] Selector:     app=guestbook,tier=frontend
I0225 22:05:29.789] Labels:       app=guestbook
I0225 22:05:29.789]               tier=frontend
I0225 22:05:29.789] Annotations:  <none>
I0225 22:05:29.789] Replicas:     3 current / 3 desired
I0225 22:05:29.789] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0225 22:05:29.789] Pod Template:
I0225 22:05:29.789]   Labels:  app=guestbook
I0225 22:05:29.789]            tier=frontend
I0225 22:05:29.789]   Containers:
I0225 22:05:29.789]    php-redis:
I0225 22:05:29.790]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
I0225 22:05:29.893] Namespace:    namespace-1551132328-32669
I0225 22:05:29.893] Selector:     app=guestbook,tier=frontend
I0225 22:05:29.893] Labels:       app=guestbook
I0225 22:05:29.894]               tier=frontend
I0225 22:05:29.894] Annotations:  <none>
I0225 22:05:29.894] Replicas:     3 current / 3 desired
I0225 22:05:29.894] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0225 22:05:29.894] Pod Template:
I0225 22:05:29.894]   Labels:  app=guestbook
I0225 22:05:29.894]            tier=frontend
I0225 22:05:29.894]   Containers:
I0225 22:05:29.894]    php-redis:
I0225 22:05:29.895]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
I0225 22:05:30.006] Namespace:    namespace-1551132328-32669
I0225 22:05:30.006] Selector:     app=guestbook,tier=frontend
I0225 22:05:30.006] Labels:       app=guestbook
I0225 22:05:30.006]               tier=frontend
I0225 22:05:30.006] Annotations:  <none>
I0225 22:05:30.006] Replicas:     3 current / 3 desired
I0225 22:05:30.006] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0225 22:05:30.007] Pod Template:
I0225 22:05:30.007]   Labels:  app=guestbook
I0225 22:05:30.007]            tier=frontend
I0225 22:05:30.007]   Containers:
I0225 22:05:30.007]    php-redis:
I0225 22:05:30.007]     Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 22 lines ...
I0225 22:05:30.849] core.sh:1087: Successful get rc frontend {{.spec.replicas}}: 3
I0225 22:05:30.935] (Bcore.sh:1091: Successful get rc frontend {{.spec.replicas}}: 3
I0225 22:05:31.025] (Breplicationcontroller/frontend scaled
I0225 22:05:31.130] core.sh:1095: Successful get rc frontend {{.spec.replicas}}: 2
I0225 22:05:31.208] (Breplicationcontroller "frontend" deleted
W0225 22:05:31.309] I0225 22:05:30.206012   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551132328-32669", Name:"frontend", UID:"76288670-3949-11e9-a41f-0242ac110002", APIVersion:"v1", ResourceVersion:"1325", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-ct8zs
W0225 22:05:31.310] error: Expected replicas to be 3, was 2
W0225 22:05:31.310] I0225 22:05:30.754798   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551132328-32669", Name:"frontend", UID:"76288670-3949-11e9-a41f-0242ac110002", APIVersion:"v1", ResourceVersion:"1331", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-jbk6s
W0225 22:05:31.310] I0225 22:05:31.033740   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551132328-32669", Name:"frontend", UID:"76288670-3949-11e9-a41f-0242ac110002", APIVersion:"v1", ResourceVersion:"1336", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-jbk6s
W0225 22:05:31.392] I0225 22:05:31.391520   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551132328-32669", Name:"redis-master", UID:"779ef350-3949-11e9-a41f-0242ac110002", APIVersion:"v1", ResourceVersion:"1347", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-d2z7q
I0225 22:05:31.492] replicationcontroller/redis-master created
I0225 22:05:31.553] replicationcontroller/redis-slave created
W0225 22:05:31.653] I0225 22:05:31.559943   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551132328-32669", Name:"redis-slave", UID:"77b889c7-3949-11e9-a41f-0242ac110002", APIVersion:"v1", ResourceVersion:"1352", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-5rwcf
... skipping 36 lines ...
I0225 22:05:33.252] service "expose-test-deployment" deleted
I0225 22:05:33.354] Successful
I0225 22:05:33.354] message:service/expose-test-deployment exposed
I0225 22:05:33.354] has:service/expose-test-deployment exposed
I0225 22:05:33.435] service "expose-test-deployment" deleted
I0225 22:05:33.526] Successful
I0225 22:05:33.527] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0225 22:05:33.527] See 'kubectl expose -h' for help and examples
I0225 22:05:33.527] has:invalid deployment: no selectors
I0225 22:05:33.616] Successful
I0225 22:05:33.616] message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
I0225 22:05:33.616] See 'kubectl expose -h' for help and examples
I0225 22:05:33.616] has:invalid deployment: no selectors
I0225 22:05:33.785] deployment.apps/nginx-deployment created
W0225 22:05:33.886] I0225 22:05:33.790060   47447 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1551132328-32669", Name:"nginx-deployment", UID:"790cea54-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1452", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-64bb598779 to 3
W0225 22:05:33.886] I0225 22:05:33.795832   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132328-32669", Name:"nginx-deployment-64bb598779", UID:"790dee02-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1453", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-64bb598779-c7qr6
W0225 22:05:33.887] I0225 22:05:33.799668   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132328-32669", Name:"nginx-deployment-64bb598779", UID:"790dee02-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1453", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-64bb598779-g4pcn
... skipping 20 lines ...
I0225 22:05:35.792] service "frontend" deleted
I0225 22:05:35.801] service "frontend-2" deleted
I0225 22:05:35.809] service "frontend-3" deleted
I0225 22:05:35.817] service "frontend-4" deleted
I0225 22:05:35.824] service "frontend-5" deleted
I0225 22:05:35.925] Successful
I0225 22:05:35.925] message:error: cannot expose a Node
I0225 22:05:35.925] has:cannot expose
I0225 22:05:36.014] Successful
I0225 22:05:36.014] message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
I0225 22:05:36.014] has:metadata.name: Invalid value
I0225 22:05:36.105] Successful
I0225 22:05:36.105] message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 33 lines ...
I0225 22:05:38.209] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0225 22:05:38.293] core.sh:1263: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0225 22:05:38.367] horizontalpodautoscaler.autoscaling "frontend" deleted
W0225 22:05:38.468] I0225 22:05:37.806738   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551132328-32669", Name:"frontend", UID:"7b72201e-3949-11e9-a41f-0242ac110002", APIVersion:"v1", ResourceVersion:"1572", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-hff6q
W0225 22:05:38.468] I0225 22:05:37.812198   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551132328-32669", Name:"frontend", UID:"7b72201e-3949-11e9-a41f-0242ac110002", APIVersion:"v1", ResourceVersion:"1572", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-hlp2x
W0225 22:05:38.468] I0225 22:05:37.812933   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551132328-32669", Name:"frontend", UID:"7b72201e-3949-11e9-a41f-0242ac110002", APIVersion:"v1", ResourceVersion:"1572", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-smfcs
W0225 22:05:38.468] Error: required flag(s) "max" not set
W0225 22:05:38.468] 
W0225 22:05:38.468] 
W0225 22:05:38.469] Examples:
W0225 22:05:38.469]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0225 22:05:38.469]   kubectl autoscale deployment foo --min=2 --max=10
W0225 22:05:38.469]   
... skipping 54 lines ...
I0225 22:05:38.664]           limits:
I0225 22:05:38.664]             cpu: 300m
I0225 22:05:38.664]           requests:
I0225 22:05:38.664]             cpu: 300m
I0225 22:05:38.664]       terminationGracePeriodSeconds: 0
I0225 22:05:38.664] status: {}
W0225 22:05:38.765] Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
I0225 22:05:38.894] deployment.apps/nginx-deployment-resources created
I0225 22:05:38.997] core.sh:1278: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
I0225 22:05:39.079] core.sh:1279: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0225 22:05:39.162] core.sh:1280: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0225 22:05:39.253] deployment.extensions/nginx-deployment-resources resource requirements updated
I0225 22:05:39.359] core.sh:1283: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
... skipping 2 lines ...
W0225 22:05:39.720] I0225 22:05:38.898877   47447 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1551132328-32669", Name:"nginx-deployment-resources", UID:"7c18858b-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1593", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-695c766d58 to 3
W0225 22:05:39.721] I0225 22:05:38.903829   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132328-32669", Name:"nginx-deployment-resources-695c766d58", UID:"7c1972ef-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1594", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-695c766d58-t8dqs
W0225 22:05:39.721] I0225 22:05:38.908145   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132328-32669", Name:"nginx-deployment-resources-695c766d58", UID:"7c1972ef-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1594", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-695c766d58-2mv7x
W0225 22:05:39.722] I0225 22:05:38.912729   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132328-32669", Name:"nginx-deployment-resources-695c766d58", UID:"7c1972ef-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1594", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-695c766d58-2jt2q
W0225 22:05:39.722] I0225 22:05:39.259418   47447 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1551132328-32669", Name:"nginx-deployment-resources", UID:"7c18858b-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1607", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-5b7fc6dd8b to 1
W0225 22:05:39.722] I0225 22:05:39.264291   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132328-32669", Name:"nginx-deployment-resources-5b7fc6dd8b", UID:"7c507b26-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1608", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-5b7fc6dd8b-2x6bl
W0225 22:05:39.722] error: unable to find container named redis
W0225 22:05:39.723] I0225 22:05:39.643034   47447 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1551132328-32669", Name:"nginx-deployment-resources", UID:"7c18858b-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1617", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-695c766d58 to 2
W0225 22:05:39.723] I0225 22:05:39.650222   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132328-32669", Name:"nginx-deployment-resources-695c766d58", UID:"7c1972ef-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1621", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-695c766d58-t8dqs
W0225 22:05:39.723] I0225 22:05:39.667237   47447 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1551132328-32669", Name:"nginx-deployment-resources", UID:"7c18858b-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1620", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6bc4567bf6 to 1
W0225 22:05:39.724] I0225 22:05:39.673119   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132328-32669", Name:"nginx-deployment-resources-6bc4567bf6", UID:"7c886965-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1627", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6bc4567bf6-v8qms
I0225 22:05:39.824] core.sh:1289: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0225 22:05:39.825] core.sh:1290: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
... skipping 211 lines ...
I0225 22:05:40.281]     status: "True"
I0225 22:05:40.281]     type: Progressing
I0225 22:05:40.281]   observedGeneration: 4
I0225 22:05:40.281]   replicas: 4
I0225 22:05:40.281]   unavailableReplicas: 4
I0225 22:05:40.281]   updatedReplicas: 1
W0225 22:05:40.382] error: you must specify resources by --filename when --local is set.
W0225 22:05:40.382] Example resource specifications include:
W0225 22:05:40.382]    '-f rsrc.yaml'
W0225 22:05:40.382]    '--filename=rsrc.json'
I0225 22:05:40.483] core.sh:1299: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
I0225 22:05:40.498] core.sh:1300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
I0225 22:05:40.579] core.sh:1301: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 44 lines ...
I0225 22:05:41.998]                 pod-template-hash=7875bf5c8b
I0225 22:05:41.998] Annotations:    deployment.kubernetes.io/desired-replicas: 1
I0225 22:05:41.998]                 deployment.kubernetes.io/max-replicas: 2
I0225 22:05:41.999]                 deployment.kubernetes.io/revision: 1
I0225 22:05:41.999] Controlled By:  Deployment/test-nginx-apps
I0225 22:05:41.999] Replicas:       1 current / 1 desired
I0225 22:05:41.999] Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0225 22:05:41.999] Pod Template:
I0225 22:05:41.999]   Labels:  app=test-nginx-apps
I0225 22:05:41.999]            pod-template-hash=7875bf5c8b
I0225 22:05:41.999]   Containers:
I0225 22:05:41.999]    nginx:
I0225 22:05:42.000]     Image:        k8s.gcr.io/nginx:test-cmd
... skipping 91 lines ...
W0225 22:05:46.135] Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
W0225 22:05:46.136] I0225 22:05:45.627775   47447 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1551132340-2523", Name:"nginx", UID:"7fca0707-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1807", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-6458c7c55b to 1
W0225 22:05:46.136] I0225 22:05:45.632112   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132340-2523", Name:"nginx-6458c7c55b", UID:"801c3ef0-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1808", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6458c7c55b-pkdjv
I0225 22:05:47.134] apps.sh:300: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0225 22:05:47.309] apps.sh:303: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0225 22:05:47.409] deployment.extensions/nginx rolled back
W0225 22:05:47.510] error: unable to find specified revision 1000000 in history
I0225 22:05:48.493] apps.sh:307: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0225 22:05:48.582] deployment.extensions/nginx paused
W0225 22:05:48.682] error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
I0225 22:05:48.783] deployment.extensions/nginx resumed
I0225 22:05:48.891] deployment.extensions/nginx rolled back
I0225 22:05:49.062]     deployment.kubernetes.io/revision-history: 1,3
W0225 22:05:49.237] error: desired revision (3) is different from the running revision (5)
I0225 22:05:49.421] deployment.apps/nginx2 created
I0225 22:05:49.507] deployment.extensions "nginx2" deleted
I0225 22:05:49.585] deployment.extensions "nginx" deleted
I0225 22:05:49.672] apps.sh:329: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:05:49.820] deployment.apps/nginx-deployment created
W0225 22:05:49.920] I0225 22:05:49.426448   47447 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1551132340-2523", Name:"nginx2", UID:"825f23d3-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1838", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx2-78cb9c866 to 3
... skipping 13 lines ...
I0225 22:05:50.529] deployment.extensions/nginx-deployment image updated
I0225 22:05:50.616] apps.sh:343: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
I0225 22:05:50.707] apps.sh:344: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0225 22:05:50.798] deployment.apps/nginx-deployment image updated
W0225 22:05:50.898] I0225 22:05:50.185179   47447 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1551132340-2523", Name:"nginx-deployment", UID:"829be645-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1886", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-5bfd55c857 to 1
W0225 22:05:50.899] I0225 22:05:50.189296   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132340-2523", Name:"nginx-deployment-5bfd55c857", UID:"82d3a4fc-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1887", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-5bfd55c857-rcmwj
W0225 22:05:50.899] error: unable to find container named "redis"
I0225 22:05:50.999] apps.sh:347: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0225 22:05:51.012] apps.sh:348: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0225 22:05:51.177] apps.sh:351: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
I0225 22:05:51.256] apps.sh:352: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
I0225 22:05:51.346] deployment.extensions/nginx-deployment image updated
W0225 22:05:51.447] I0225 22:05:51.374019   47447 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1551132340-2523", Name:"nginx-deployment", UID:"829be645-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"1903", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-79b6f6d8f5 to 2
... skipping 85 lines ...
I0225 22:05:56.156] Namespace:    namespace-1551132354-2373
I0225 22:05:56.156] Selector:     app=guestbook,tier=frontend
I0225 22:05:56.156] Labels:       app=guestbook
I0225 22:05:56.156]               tier=frontend
I0225 22:05:56.156] Annotations:  <none>
I0225 22:05:56.156] Replicas:     3 current / 3 desired
I0225 22:05:56.157] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0225 22:05:56.157] Pod Template:
I0225 22:05:56.157]   Labels:  app=guestbook
I0225 22:05:56.157]            tier=frontend
I0225 22:05:56.157]   Containers:
I0225 22:05:56.157]    php-redis:
I0225 22:05:56.157]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0225 22:05:56.251] Namespace:    namespace-1551132354-2373
I0225 22:05:56.251] Selector:     app=guestbook,tier=frontend
I0225 22:05:56.251] Labels:       app=guestbook
I0225 22:05:56.251]               tier=frontend
I0225 22:05:56.252] Annotations:  <none>
I0225 22:05:56.252] Replicas:     3 current / 3 desired
I0225 22:05:56.252] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0225 22:05:56.252] Pod Template:
I0225 22:05:56.252]   Labels:  app=guestbook
I0225 22:05:56.252]            tier=frontend
I0225 22:05:56.252]   Containers:
I0225 22:05:56.252]    php-redis:
I0225 22:05:56.252]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
I0225 22:05:56.349] Namespace:    namespace-1551132354-2373
I0225 22:05:56.349] Selector:     app=guestbook,tier=frontend
I0225 22:05:56.349] Labels:       app=guestbook
I0225 22:05:56.349]               tier=frontend
I0225 22:05:56.349] Annotations:  <none>
I0225 22:05:56.349] Replicas:     3 current / 3 desired
I0225 22:05:56.349] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0225 22:05:56.350] Pod Template:
I0225 22:05:56.350]   Labels:  app=guestbook
I0225 22:05:56.350]            tier=frontend
I0225 22:05:56.350]   Containers:
I0225 22:05:56.350]    php-redis:
I0225 22:05:56.350]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
I0225 22:05:56.444] Namespace:    namespace-1551132354-2373
I0225 22:05:56.444] Selector:     app=guestbook,tier=frontend
I0225 22:05:56.444] Labels:       app=guestbook
I0225 22:05:56.444]               tier=frontend
I0225 22:05:56.444] Annotations:  <none>
I0225 22:05:56.445] Replicas:     3 current / 3 desired
I0225 22:05:56.445] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0225 22:05:56.445] Pod Template:
I0225 22:05:56.445]   Labels:  app=guestbook
I0225 22:05:56.445]            tier=frontend
I0225 22:05:56.445]   Containers:
I0225 22:05:56.445]    php-redis:
I0225 22:05:56.445]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 10 lines ...
I0225 22:05:56.447]   Type    Reason            Age   From                   Message
I0225 22:05:56.447]   ----    ------            ----  ----                   -------
I0225 22:05:56.447]   Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-zxqbj
I0225 22:05:56.447]   Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-jcrlq
I0225 22:05:56.447]   Normal  SuccessfulCreate  1s    replicaset-controller  Created pod: frontend-gzghm
I0225 22:05:56.447]
W0225 22:05:56.548] E0225 22:05:54.212618   47447 replica_set.go:450] Sync "namespace-1551132340-2523/nginx-deployment-54979c5b5c" failed with replicasets.apps "nginx-deployment-54979c5b5c" not found
W0225 22:05:56.548] I0225 22:05:54.845007   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132354-2373", Name:"frontend", UID:"8599c86f-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2068", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-99h6q
W0225 22:05:56.549] I0225 22:05:54.849270   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132354-2373", Name:"frontend", UID:"8599c86f-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2068", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-hwf4b
W0225 22:05:56.549] I0225 22:05:54.849400   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132354-2373", Name:"frontend", UID:"8599c86f-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2068", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-9pgw6
W0225 22:05:56.549] I0225 22:05:55.222479   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132354-2373", Name:"frontend-no-cascade", UID:"85d3a6d2-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2084", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-6vrnx
W0225 22:05:56.549] I0225 22:05:55.227048   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132354-2373", Name:"frontend-no-cascade", UID:"85d3a6d2-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2084", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-6mv4f
W0225 22:05:56.550] I0225 22:05:55.227288   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132354-2373", Name:"frontend-no-cascade", UID:"85d3a6d2-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2084", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-no-cascade-ztrbg
W0225 22:05:56.550] E0225 22:05:55.411599   47447 replica_set.go:450] Sync "namespace-1551132354-2373/frontend-no-cascade" failed with replicasets.apps "frontend-no-cascade" not found
W0225 22:05:56.550] I0225 22:05:55.953200   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132354-2373", Name:"frontend", UID:"8642ebea-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2106", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-zxqbj
W0225 22:05:56.550] I0225 22:05:55.957325   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132354-2373", Name:"frontend", UID:"8642ebea-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2106", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-jcrlq
W0225 22:05:56.551] I0225 22:05:55.957965   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1551132354-2373", Name:"frontend", UID:"8642ebea-3949-11e9-a41f-0242ac110002", APIVersion:"apps/v1", ResourceVersion:"2106", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-gzghm
I0225 22:05:56.651] Successful describe rs:
I0225 22:05:56.651] Name:         frontend
I0225 22:05:56.652] Namespace:    namespace-1551132354-2373
I0225 22:05:56.652] Selector:     app=guestbook,tier=frontend
I0225 22:05:56.652] Labels:       app=guestbook
I0225 22:05:56.652]               tier=frontend
I0225 22:05:56.652] Annotations:  <none>
I0225 22:05:56.652] Replicas:     3 current / 3 desired
I0225 22:05:56.653] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0225 22:05:56.653] Pod Template:
I0225 22:05:56.653]   Labels:  app=guestbook
I0225 22:05:56.653]            tier=frontend
I0225 22:05:56.653]   Containers:
I0225 22:05:56.653]    php-redis:
I0225 22:05:56.653]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0225 22:05:56.663] Namespace:    namespace-1551132354-2373
I0225 22:05:56.663] Selector:     app=guestbook,tier=frontend
I0225 22:05:56.663] Labels:       app=guestbook
I0225 22:05:56.663]               tier=frontend
I0225 22:05:56.663] Annotations:  <none>
I0225 22:05:56.663] Replicas:     3 current / 3 desired
I0225 22:05:56.664] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0225 22:05:56.664] Pod Template:
I0225 22:05:56.664]   Labels:  app=guestbook
I0225 22:05:56.664]            tier=frontend
I0225 22:05:56.664]   Containers:
I0225 22:05:56.664]    php-redis:
I0225 22:05:56.664]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
I0225 22:05:56.757] Namespace:    namespace-1551132354-2373
I0225 22:05:56.757] Selector:     app=guestbook,tier=frontend
I0225 22:05:56.758] Labels:       app=guestbook
I0225 22:05:56.758]               tier=frontend
I0225 22:05:56.758] Annotations:  <none>
I0225 22:05:56.758] Replicas:     3 current / 3 desired
I0225 22:05:56.758] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0225 22:05:56.758] Pod Template:
I0225 22:05:56.758]   Labels:  app=guestbook
I0225 22:05:56.758]            tier=frontend
I0225 22:05:56.758]   Containers:
I0225 22:05:56.759]    php-redis:
I0225 22:05:56.759]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
I0225 22:05:56.857] Namespace:    namespace-1551132354-2373
I0225 22:05:56.857] Selector:     app=guestbook,tier=frontend
I0225 22:05:56.857] Labels:       app=guestbook
I0225 22:05:56.857]               tier=frontend
I0225 22:05:56.858] Annotations:  <none>
I0225 22:05:56.858] Replicas:     3 current / 3 desired
I0225 22:05:56.858] Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
I0225 22:05:56.858] Pod Template:
I0225 22:05:56.858]   Labels:  app=guestbook
I0225 22:05:56.858]            tier=frontend
I0225 22:05:56.858]   Containers:
I0225 22:05:56.858]    php-redis:
I0225 22:05:56.858]     Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 184 lines ...
I0225 22:06:01.913] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0225 22:06:02.011] apps.sh:643: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
I0225 22:06:02.088] horizontalpodautoscaler.autoscaling "frontend" deleted
I0225 22:06:02.189] horizontalpodautoscaler.autoscaling/frontend autoscaled
I0225 22:06:02.274] apps.sh:647: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
I0225 22:06:02.343] horizontalpodautoscaler.autoscaling "frontend" deleted
W0225 22:06:02.443] Error: required flag(s) "max" not set
W0225 22:06:02.443] 
W0225 22:06:02.443] 
W0225 22:06:02.444] Examples:
W0225 22:06:02.444]   # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used:
W0225 22:06:02.444]   kubectl autoscale deployment foo --min=2 --max=10
W0225 22:06:02.444]   
... skipping 88 lines ...
I0225 22:06:05.438] apps.sh:431: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
I0225 22:06:05.519] apps.sh:432: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
I0225 22:06:05.615] statefulset.apps/nginx rolled back
I0225 22:06:05.704] apps.sh:435: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0225 22:06:05.792] apps.sh:436: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0225 22:06:05.889] Successful
I0225 22:06:05.890] message:error: unable to find specified revision 1000000 in history
I0225 22:06:05.890] has:unable to find specified revision
I0225 22:06:05.981] apps.sh:440: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
I0225 22:06:06.064] apps.sh:441: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
I0225 22:06:06.169] statefulset.apps/nginx rolled back
I0225 22:06:06.258] apps.sh:444: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
I0225 22:06:06.343] apps.sh:445: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
... skipping 58 lines ...
I0225 22:06:08.376] Name:         mock
I0225 22:06:08.376] Namespace:    namespace-1551132367-12057
I0225 22:06:08.377] Selector:     app=mock
I0225 22:06:08.377] Labels:       app=mock
I0225 22:06:08.377] Annotations:  <none>
I0225 22:06:08.377] Replicas:     1 current / 1 desired
I0225 22:06:08.377] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0225 22:06:08.377] Pod Template:
I0225 22:06:08.377]   Labels:  app=mock
I0225 22:06:08.377]   Containers:
I0225 22:06:08.377]    mock-container:
I0225 22:06:08.378]     Image:        k8s.gcr.io/pause:2.0
I0225 22:06:08.378]     Port:         9949/TCP
... skipping 56 lines ...
I0225 22:06:10.932] Name:         mock
I0225 22:06:10.933] Namespace:    namespace-1551132367-12057
I0225 22:06:10.933] Selector:     app=mock
I0225 22:06:10.933] Labels:       app=mock
I0225 22:06:10.933] Annotations:  <none>
I0225 22:06:10.933] Replicas:     1 current / 1 desired
I0225 22:06:10.933] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0225 22:06:10.933] Pod Template:
I0225 22:06:10.933]   Labels:  app=mock
I0225 22:06:10.933]   Containers:
I0225 22:06:10.934]    mock-container:
I0225 22:06:10.934]     Image:        k8s.gcr.io/pause:2.0
I0225 22:06:10.934]     Port:         9949/TCP
... skipping 56 lines ...
I0225 22:06:13.130] Name:         mock
I0225 22:06:13.130] Namespace:    namespace-1551132367-12057
I0225 22:06:13.130] Selector:     app=mock
I0225 22:06:13.130] Labels:       app=mock
I0225 22:06:13.130] Annotations:  <none>
I0225 22:06:13.130] Replicas:     1 current / 1 desired
I0225 22:06:13.131] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0225 22:06:13.131] Pod Template:
I0225 22:06:13.131]   Labels:  app=mock
I0225 22:06:13.131]   Containers:
I0225 22:06:13.131]    mock-container:
I0225 22:06:13.131]     Image:        k8s.gcr.io/pause:2.0
I0225 22:06:13.131]     Port:         9949/TCP
... skipping 42 lines ...
I0225 22:06:15.424] Namespace:    namespace-1551132367-12057
I0225 22:06:15.424] Selector:     app=mock
I0225 22:06:15.424] Labels:       app=mock
I0225 22:06:15.424]               status=replaced
I0225 22:06:15.424] Annotations:  <none>
I0225 22:06:15.424] Replicas:     1 current / 1 desired
I0225 22:06:15.425] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0225 22:06:15.425] Pod Template:
I0225 22:06:15.425]   Labels:  app=mock
I0225 22:06:15.425]   Containers:
I0225 22:06:15.425]    mock-container:
I0225 22:06:15.425]     Image:        k8s.gcr.io/pause:2.0
I0225 22:06:15.425]     Port:         9949/TCP
... skipping 11 lines ...
I0225 22:06:15.428] Namespace:    namespace-1551132367-12057
I0225 22:06:15.428] Selector:     app=mock2
I0225 22:06:15.428] Labels:       app=mock2
I0225 22:06:15.429]               status=replaced
I0225 22:06:15.429] Annotations:  <none>
I0225 22:06:15.429] Replicas:     1 current / 1 desired
I0225 22:06:15.429] Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
I0225 22:06:15.429] Pod Template:
I0225 22:06:15.429]   Labels:  app=mock2
I0225 22:06:15.429]   Containers:
I0225 22:06:15.429]    mock-container:
I0225 22:06:15.430]     Image:        k8s.gcr.io/pause:2.0
I0225 22:06:15.430]     Port:         9949/TCP
... skipping 107 lines ...
I0225 22:06:20.542] +++ [0225 22:06:20] Testing persistent volumes
I0225 22:06:20.623] storage.sh:30: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:06:20.775] (Bpersistentvolume/pv0001 created
I0225 22:06:20.868] storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
I0225 22:06:20.943] (Bpersistentvolume "pv0001" deleted
W0225 22:06:21.043] I0225 22:06:19.732933   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551132367-12057", Name:"mock", UID:"946fa48c-3949-11e9-a41f-0242ac110002", APIVersion:"v1", ResourceVersion:"2566", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: mock-p2npx
W0225 22:06:21.095] E0225 22:06:21.095141   47447 pv_protection_controller.go:116] PV pv0002 failed with : Operation cannot be fulfilled on persistentvolumes "pv0002": the object has been modified; please apply your changes to the latest version and try again
I0225 22:06:21.196] persistentvolume/pv0002 created
I0225 22:06:21.196] storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
I0225 22:06:21.259] (Bpersistentvolume "pv0002" deleted
I0225 22:06:21.415] persistentvolume/pv0003 created
I0225 22:06:21.509] storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
I0225 22:06:21.581] (Bpersistentvolume "pv0003" deleted
... skipping 10 lines ...
I0225 22:06:21.925] Context "test" modified.
I0225 22:06:21.931] +++ [0225 22:06:21] Testing persistent volumes claims
I0225 22:06:22.012] storage.sh:57: Successful get pvc {{range.items}}{{.metadata.name}}:{{end}}: 
I0225 22:06:22.174] (Bpersistentvolumeclaim/myclaim-1 created
I0225 22:06:22.266] storage.sh:60: Successful get pvc {{range.items}}{{.metadata.name}}:{{end}}: myclaim-1:
I0225 22:06:22.344] (Bpersistentvolumeclaim "myclaim-1" deleted
W0225 22:06:22.445] E0225 22:06:21.419459   47447 pv_protection_controller.go:116] PV pv0003 failed with : Operation cannot be fulfilled on persistentvolumes "pv0003": the object has been modified; please apply your changes to the latest version and try again
W0225 22:06:22.445] I0225 22:06:22.175303   47447 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"namespace-1551132381-4761", Name:"myclaim-1", UID:"95e4ef15-3949-11e9-a41f-0242ac110002", APIVersion:"v1", ResourceVersion:"2597", FieldPath:""}): type: 'Normal' reason: 'FailedBinding' no persistent volumes available for this claim and no storage class is set
W0225 22:06:22.446] I0225 22:06:22.178985   47447 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"namespace-1551132381-4761", Name:"myclaim-1", UID:"95e4ef15-3949-11e9-a41f-0242ac110002", APIVersion:"v1", ResourceVersion:"2599", FieldPath:""}): type: 'Normal' reason: 'FailedBinding' no persistent volumes available for this claim and no storage class is set
W0225 22:06:22.446] I0225 22:06:22.344235   47447 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"namespace-1551132381-4761", Name:"myclaim-1", UID:"95e4ef15-3949-11e9-a41f-0242ac110002", APIVersion:"v1", ResourceVersion:"2602", FieldPath:""}): type: 'Normal' reason: 'FailedBinding' no persistent volumes available for this claim and no storage class is set
W0225 22:06:22.496] I0225 22:06:22.495602   47447 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"namespace-1551132381-4761", Name:"myclaim-2", UID:"9615fbfa-3949-11e9-a41f-0242ac110002", APIVersion:"v1", ResourceVersion:"2605", FieldPath:""}): type: 'Normal' reason: 'FailedBinding' no persistent volumes available for this claim and no storage class is set
W0225 22:06:22.500] I0225 22:06:22.499640   47447 event.go:209] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"namespace-1551132381-4761", Name:"myclaim-2", UID:"9615fbfa-3949-11e9-a41f-0242ac110002", APIVersion:"v1", ResourceVersion:"2607", FieldPath:""}): type: 'Normal' reason: 'FailedBinding' no persistent volumes available for this claim and no storage class is set
I0225 22:06:22.600] persistentvolumeclaim/myclaim-2 created
... skipping 466 lines ...
I0225 22:06:26.295] yes
I0225 22:06:26.295] has:the server doesn't have a resource type
I0225 22:06:26.370] Successful
I0225 22:06:26.370] message:yes
I0225 22:06:26.370] has:yes
I0225 22:06:26.438] Successful
I0225 22:06:26.438] message:error: --subresource can not be used with NonResourceURL
I0225 22:06:26.438] has:subresource can not be used with NonResourceURL
I0225 22:06:26.513] Successful
I0225 22:06:26.591] Successful
I0225 22:06:26.592] message:yes
I0225 22:06:26.592] 0
I0225 22:06:26.592] has:0
... skipping 6 lines ...
I0225 22:06:26.773] role.rbac.authorization.k8s.io/testing-R reconciled
I0225 22:06:26.861] legacy-script.sh:763: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
I0225 22:06:26.945] (Blegacy-script.sh:764: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
I0225 22:06:27.031] (Blegacy-script.sh:765: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
I0225 22:06:27.126] (Blegacy-script.sh:766: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
I0225 22:06:27.201] (BSuccessful
I0225 22:06:27.201] message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
I0225 22:06:27.202] has:only rbac.authorization.k8s.io/v1 is supported
I0225 22:06:27.285] rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
I0225 22:06:27.290] role.rbac.authorization.k8s.io "testing-R" deleted
I0225 22:06:27.300] clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
I0225 22:06:27.310] clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
I0225 22:06:27.321] Recording: run_retrieve_multiple_tests
... skipping 44 lines ...
I0225 22:06:28.388] +++ Running case: test-cmd.run_kubectl_explain_tests 
I0225 22:06:28.390] +++ working dir: /go/src/k8s.io/kubernetes
I0225 22:06:28.392] +++ command: run_kubectl_explain_tests
I0225 22:06:28.399] +++ [0225 22:06:28] Testing kubectl(v1:explain)
W0225 22:06:28.500] I0225 22:06:28.278661   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551132387-17274", Name:"cassandra", UID:"994c9a97-3949-11e9-a41f-0242ac110002", APIVersion:"v1", ResourceVersion:"2646", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-jt7kz
W0225 22:06:28.500] I0225 22:06:28.295266   47447 event.go:209] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1551132387-17274", Name:"cassandra", UID:"994c9a97-3949-11e9-a41f-0242ac110002", APIVersion:"v1", ResourceVersion:"2646", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-6md8c
W0225 22:06:28.501] E0225 22:06:28.301843   47447 replica_set.go:450] Sync "namespace-1551132387-17274/cassandra" failed with replicationcontrollers "cassandra" not found
I0225 22:06:28.601] KIND:     Pod
I0225 22:06:28.601] VERSION:  v1
I0225 22:06:28.601] 
I0225 22:06:28.601] DESCRIPTION:
I0225 22:06:28.602]      Pod is a collection of containers that can run on a host. This resource is
I0225 22:06:28.602]      created by clients and scheduled onto hosts.
... skipping 1101 lines ...
I0225 22:06:54.228] message:node/127.0.0.1 already uncordoned (dry run)
I0225 22:06:54.228] has:already uncordoned
I0225 22:06:54.326] node-management.sh:119: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
I0225 22:06:54.420] (Bnode/127.0.0.1 labeled
I0225 22:06:54.509] node-management.sh:124: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
I0225 22:06:54.581] (BSuccessful
I0225 22:06:54.581] message:error: cannot specify both a node name and a --selector option
I0225 22:06:54.581] See 'kubectl drain -h' for help and examples
I0225 22:06:54.581] has:cannot specify both a node name
I0225 22:06:54.645] Successful
I0225 22:06:54.645] message:error: USAGE: cordon NODE [flags]
I0225 22:06:54.645] See 'kubectl cordon -h' for help and examples
I0225 22:06:54.646] has:error\: USAGE\: cordon NODE
I0225 22:06:54.717] node/127.0.0.1 already uncordoned
I0225 22:06:54.789] Successful
I0225 22:06:54.790] message:error: You must provide one or more resources by argument or filename.
I0225 22:06:54.790] Example resource specifications include:
I0225 22:06:54.790]    '-f rsrc.yaml'
I0225 22:06:54.790]    '--filename=rsrc.json'
I0225 22:06:54.790]    '<resource> <name>'
I0225 22:06:54.790]    '<resource>'
I0225 22:06:54.790] has:must provide one or more resources
... skipping 15 lines ...
I0225 22:06:55.224] Successful
I0225 22:06:55.225] message:The following compatible plugins are available:
I0225 22:06:55.225] 
I0225 22:06:55.225] test/fixtures/pkg/kubectl/plugins/version/kubectl-version
I0225 22:06:55.225]   - warning: kubectl-version overwrites existing command: "kubectl version"
I0225 22:06:55.225] 
I0225 22:06:55.225] error: one plugin warning was found
I0225 22:06:55.225] has:kubectl-version overwrites existing command: "kubectl version"
I0225 22:06:55.297] Successful
I0225 22:06:55.297] message:The following compatible plugins are available:
I0225 22:06:55.297] 
I0225 22:06:55.297] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0225 22:06:55.298] test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
I0225 22:06:55.298]   - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0225 22:06:55.298] 
I0225 22:06:55.298] error: one plugin warning was found
I0225 22:06:55.298] has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
I0225 22:06:55.367] Successful
I0225 22:06:55.368] message:The following compatible plugins are available:
I0225 22:06:55.368] 
I0225 22:06:55.368] test/fixtures/pkg/kubectl/plugins/kubectl-foo
I0225 22:06:55.368] has:plugins are available
I0225 22:06:55.438] Successful
I0225 22:06:55.438] message:
I0225 22:06:55.438] error: unable to find any kubectl plugins in your PATH
I0225 22:06:55.439] has:unable to find any kubectl plugins in your PATH
I0225 22:06:55.509] Successful
I0225 22:06:55.510] message:I am plugin foo
I0225 22:06:55.510] has:plugin foo
I0225 22:06:55.580] Successful
I0225 22:06:55.580] message:Client Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.0-alpha.0.348+ecee4ad3812b8c", GitCommit:"ecee4ad3812b8c765a4a79441c0584b171c21a8e", GitTreeState:"clean", BuildDate:"2019-02-25T22:00:20Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
... skipping 9 lines ...
I0225 22:06:55.671] 
I0225 22:06:55.673] +++ Running case: test-cmd.run_impersonation_tests 
I0225 22:06:55.675] +++ working dir: /go/src/k8s.io/kubernetes
I0225 22:06:55.677] +++ command: run_impersonation_tests
I0225 22:06:55.686] +++ [0225 22:06:55] Testing impersonation
I0225 22:06:55.750] Successful
I0225 22:06:55.751] message:error: requesting groups or user-extra for  without impersonating a user
I0225 22:06:55.751] has:without impersonating a user
I0225 22:06:55.899] certificatesigningrequest.certificates.k8s.io/foo created
I0225 22:06:55.992] authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
I0225 22:06:56.074] (Bauthorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
I0225 22:06:56.151] (Bcertificatesigningrequest.certificates.k8s.io "foo" deleted
I0225 22:06:56.306] certificatesigningrequest.certificates.k8s.io/foo created
... skipping 37 lines ...
W0225 22:06:59.380] I0225 22:06:59.380089   44034 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0225 22:06:59.381] W0225 22:06:59.380682   44034 clientconn.go:1304] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
... skipping 12 lines ...
... skipping 23 lines ...
... skipping 18 lines ...
W0225 22:06:59.389] I0225 22:06:59.383602   44034 secure_serving.go:160] Stopped listening on 127.0.0.1:8080
... skipping 26 lines ...
W0225 22:06:59.393] I0225 22:06:59.384489   44034 secure_serving.go:160] Stopped listening on 127.0.0.1:6443
... skipping 5 lines ...
W0225 22:06:59.394] E0225 22:06:59.384076   44034 controller.go:172] rpc error: code = Unavailable desc = transport is closing
... skipping 20 lines ...
W0225 22:06:59.398] I0225 22:06:59.385371   44034 establishing_controller.go:84] Shutting down EstablishingController
W0225 22:06:59.398] I0225 22:06:59.385549   44034 apiservice_controller.go:106] Shutting down APIServiceRegistrationController
W0225 22:06:59.398] I0225 22:06:59.385558   44034 autoregister_controller.go:163] Shutting down autoregister controller
W0225 22:06:59.398] I0225 22:06:59.385562   44034 balancer_v1_wrapper.go:125] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
... skipping 59 lines ...
I0225 22:19:06.715] ok  	k8s.io/kubernetes/test/integration/serving	47.982s
I0225 22:19:06.715] ok  	k8s.io/kubernetes/test/integration/statefulset	12.390s
I0225 22:19:06.715] ok  	k8s.io/kubernetes/test/integration/storageclasses	4.714s
I0225 22:19:06.715] ok  	k8s.io/kubernetes/test/integration/tls	6.749s
I0225 22:19:06.715] ok  	k8s.io/kubernetes/test/integration/ttlcontroller	10.794s
I0225 22:19:06.715] ok  	k8s.io/kubernetes/test/integration/volume	91.902s
I0225 22:19:06.716] FAIL	k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/test/integration	93.186s
I0225 22:19:21.115] +++ [0225 22:19:21] Saved JUnit XML test report to /workspace/artifacts/junit_34ec65c8459586587b0004cdabcb6aa30b905266_20190225-220708.xml
I0225 22:19:21.118] Makefile:184: recipe for target 'test' failed
I0225 22:19:21.129] +++ [0225 22:19:21] Cleaning up etcd
W0225 22:19:21.230] make[1]: *** [test] Error 1
W0225 22:19:21.230] !!! [0225 22:19:21] Call tree:
W0225 22:19:21.231] !!! [0225 22:19:21]  1: hack/make-rules/test-integration.sh:99 runTests(...)
I0225 22:19:21.389] +++ [0225 22:19:21] Integration test cleanup complete
I0225 22:19:21.389] Makefile:203: recipe for target 'test-integration' failed
W0225 22:19:21.490] make: *** [test-integration] Error 1
W0225 22:19:23.992] Traceback (most recent call last):
W0225 22:19:23.993]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 178, in <module>
W0225 22:19:23.993]     ARGS.exclude_typecheck, ARGS.exclude_godep)
W0225 22:19:23.993]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 140, in main
W0225 22:19:23.994]     check(*cmd)
W0225 22:19:23.994]   File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_verify.py", line 48, in check
W0225 22:19:23.994]     subprocess.check_call(cmd)
W0225 22:19:23.994]   File "/usr/lib/python2.7/subprocess.py", line 186, in check_call
W0225 22:19:24.023]     raise CalledProcessError(retcode, cmd)
W0225 22:19:24.024] subprocess.CalledProcessError: Command '('docker', 'run', '--rm=true', '--privileged=true', '-v', '/var/run/docker.sock:/var/run/docker.sock', '-v', '/etc/localtime:/etc/localtime:ro', '-v', '/workspace/k8s.io/kubernetes:/go/src/k8s.io/kubernetes', '-v', '/workspace/k8s.io/:/workspace/k8s.io/', '-v', '/workspace/_artifacts:/workspace/artifacts', '-e', 'KUBE_FORCE_VERIFY_CHECKS=n', '-e', 'KUBE_VERIFY_GIT_BRANCH=master', '-e', 'EXCLUDE_TYPECHECK=n', '-e', 'EXCLUDE_GODEP=n', '-e', 'REPO_DIR=/workspace/k8s.io/kubernetes', '--tmpfs', '/tmp:exec,mode=1777', 'gcr.io/k8s-testimages/kubekins-test:1.13-v20190125-cc5d6ecff3', 'bash', '-c', 'cd kubernetes && ./hack/jenkins/test-dockerized.sh')' returned non-zero exit status 2
E0225 22:19:24.032] Command failed
I0225 22:19:24.033] process 702 exited with code 1 after 24.0m
E0225 22:19:24.033] FAIL: pull-kubernetes-integration
I0225 22:19:24.033] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0225 22:19:24.589] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0225 22:19:24.646] process 115778 exited with code 0 after 0.0m
I0225 22:19:24.646] Call:  gcloud config get-value account
I0225 22:19:24.958] process 115790 exited with code 0 after 0.0m
I0225 22:19:24.959] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0225 22:19:24.959] Upload result and artifacts...
I0225 22:19:24.959] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/73650/pull-kubernetes-integration/46303
I0225 22:19:24.959] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/73650/pull-kubernetes-integration/46303/artifacts
W0225 22:19:26.091] CommandException: One or more URLs matched no objects.
E0225 22:19:26.235] Command failed
I0225 22:19:26.235] process 115802 exited with code 1 after 0.0m
W0225 22:19:26.236] Remote dir gs://kubernetes-jenkins/pr-logs/pull/73650/pull-kubernetes-integration/46303/artifacts not exist yet
I0225 22:19:26.236] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/73650/pull-kubernetes-integration/46303/artifacts
I0225 22:19:30.621] process 115944 exited with code 0 after 0.1m
W0225 22:19:30.621] metadata path /workspace/_artifacts/metadata.json does not exist
W0225 22:19:30.621] metadata not found or invalid, init with empty metadata
... skipping 23 lines ...