PR eloyekunle: Extended CRD Validation
Result FAILURE
Tests 1 failed / 2480 succeeded
Started 2020-02-14 17:20
Elapsed 26m10s
Revision 8c1515d359910366fc73a9d5f0d588bf7e9aa3ca
Refs 88076

Test Failures


k8s.io/kubernetes/test/integration/apiserver/apply TestApplyCRDStructuralSchema 9.24s

go test -v k8s.io/kubernetes/test/integration/apiserver/apply -run TestApplyCRDStructuralSchema$
=== RUN   TestApplyCRDStructuralSchema
I0214 17:36:28.929075  108477 controller.go:181] Shutting down kubernetes service endpoint reconciler
I0214 17:36:28.929278  108477 dynamic_cafile_content.go:181] Shutting down request-header::/tmp/kubernetes-kube-apiserver547317363/proxy-ca.crt
I0214 17:36:28.929303  108477 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
I0214 17:36:28.929324  108477 apiservice_controller.go:106] Shutting down APIServiceRegistrationController
I0214 17:36:28.929358  108477 controller.go:123] Shutting down OpenAPI controller
I0214 17:36:28.929390  108477 customresource_discovery_controller.go:220] Shutting down DiscoveryController
I0214 17:36:28.929416  108477 apiapproval_controller.go:198] Shutting down KubernetesAPIApprovalPolicyConformantConditionController
I0214 17:36:28.929431  108477 nonstructuralschema_controller.go:198] Shutting down NonStructuralSchemaConditionController
I0214 17:36:28.929445  108477 establishing_controller.go:87] Shutting down EstablishingController
I0214 17:36:28.929455  108477 secure_serving.go:222] Stopped listening on 127.0.0.1:42015
I0214 17:36:28.929463  108477 dynamic_cafile_content.go:181] Shutting down request-header::/tmp/kubernetes-kube-apiserver547317363/proxy-ca.crt
I0214 17:36:28.929468  108477 tlsconfig.go:256] Shutting down DynamicServingCertificateController
I0214 17:36:28.929475  108477 dynamic_cafile_content.go:181] Shutting down client-ca-bundle::/tmp/kubernetes-kube-apiserver547317363/client-ca.crt
I0214 17:36:28.929487  108477 naming_controller.go:302] Shutting down NamingConditionController
I0214 17:36:28.929489  108477 dynamic_serving_content.go:144] Shutting down serving-cert::/tmp/kubernetes-kube-apiserver547317363/apiserver.crt::/tmp/kubernetes-kube-apiserver547317363/apiserver.key
I0214 17:36:28.929499  108477 crd_finalizer.go:278] Shutting down CRDFinalizer
I0214 17:36:28.929512  108477 dynamic_cafile_content.go:181] Shutting down client-ca-bundle::/tmp/kubernetes-kube-apiserver547317363/client-ca.crt
I0214 17:36:28.929521  108477 autoregister_controller.go:165] Shutting down autoregister controller
I0214 17:36:28.929538  108477 available_controller.go:399] Shutting down AvailableConditionController
I0214 17:36:28.929514  108477 crdregistration_controller.go:142] Shutting down crd-autoregister controller
I0214 17:36:28.929557  108477 controller.go:87] Shutting down OpenAPI AggregationController
I0214 17:36:30.993147  108477 serving.go:307] Generated self-signed cert (/tmp/kubernetes-kube-apiserver963062582/apiserver.crt, /tmp/kubernetes-kube-apiserver963062582/apiserver.key)
I0214 17:36:30.993166  108477 server.go:628] external host was not specified, using 127.0.0.1
W0214 17:36:30.993176  108477 authentication.go:469] AnonymousAuth is not allowed with the AlwaysAllow authorizer. Resetting AnonymousAuth to false. You should use a different authorizer
W0214 17:36:32.055109  108477 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
... skipping 11 lines ...
I0214 17:36:32.056792  108477 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I0214 17:36:32.056803  108477 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I0214 17:36:32.057883  108477 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I0214 17:36:32.057903  108477 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I0214 17:36:32.059372  108477 client.go:361] parsed scheme: "endpoint"
I0214 17:36:32.059415  108477 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 17:36:32.060564  108477 client.go:361] parsed scheme: "endpoint"
I0214 17:36:32.060595  108477 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
W0214 17:36:32.093721  108477 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0214 17:36:32.094723  108477 master.go:270] Using reconciler: lease
... skipping 36 lines ...
I0214 17:36:32.125885  108477 rest.go:113] the default service ipfamily for this cluster is: IPv4
... skipping 108 lines ...
W0214 17:36:32.680764  108477 genericapiserver.go:404] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0214 17:36:32.823443  108477 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0214 17:36:32.823593  108477 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0214 17:36:32.839079  108477 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I0214 17:36:32.839110  108477 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
W0214 17:36:32.840544  108477 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0214 17:36:32.840766  108477 client.go:361] parsed scheme: "endpoint"
I0214 17:36:32.840882  108477 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 17:36:32.841801  108477 client.go:361] parsed scheme: "endpoint"
I0214 17:36:32.841830  108477 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
W0214 17:36:32.844237  108477 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0214 17:36:33.054951  108477 client.go:361] parsed scheme: "endpoint"
I0214 17:36:33.055048  108477 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
W0214 17:36:33.429953  108477 reflector.go:402] k8s.io/client-go/informers/factory.go:135: watch of *v1.ValidatingWebhookConfiguration ended with: very short watch: k8s.io/client-go/informers/factory.go:135: Unexpected watch close - watch lasted less than a second and no items received
W0214 17:36:33.429966  108477 reflector.go:402] k8s.io/client-go/informers/factory.go:135: watch of *v1.StorageClass ended with: very short watch: k8s.io/client-go/informers/factory.go:135: Unexpected watch close - watch lasted less than a second and no items received
W0214 17:36:33.430063  108477 reflector.go:402] k8s.io/client-go/informers/factory.go:135: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:135: Unexpected watch close - watch lasted less than a second and no items received
W0214 17:36:33.430079  108477 reflector.go:402] k8s.io/client-go/informers/factory.go:135: watch of *v1.ServiceAccount ended with: very short watch: k8s.io/client-go/informers/factory.go:135: Unexpected watch close - watch lasted less than a second and no items received
W0214 17:36:33.430134  108477 reflector.go:402] k8s.io/client-go/informers/factory.go:135: watch of *v1.Namespace ended with: very short watch: k8s.io/client-go/informers/factory.go:135: Unexpected watch close - watch lasted less than a second and no items received
W0214 17:36:33.430184  108477 reflector.go:402] k8s.io/client-go/informers/factory.go:135: watch of *v1.Endpoints ended with: very short watch: k8s.io/client-go/informers/factory.go:135: Unexpected watch close - watch lasted less than a second and no items received
W0214 17:36:33.430251  108477 reflector.go:402] k8s.io/client-go/informers/factory.go:135: watch of *v1beta1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:135: Unexpected watch close - watch lasted less than a second and no items received
I0214 17:36:36.991665  108477 dynamic_cafile_content.go:166] Starting request-header::/tmp/kubernetes-kube-apiserver963062582/proxy-ca.crt
I0214 17:36:36.991695  108477 dynamic_cafile_content.go:166] Starting client-ca-bundle::/tmp/kubernetes-kube-apiserver963062582/client-ca.crt
I0214 17:36:36.992058  108477 dynamic_serving_content.go:129] Starting serving-cert::/tmp/kubernetes-kube-apiserver963062582/apiserver.crt::/tmp/kubernetes-kube-apiserver963062582/apiserver.key
I0214 17:36:36.992629  108477 secure_serving.go:178] Serving securely on 127.0.0.1:36761
I0214 17:36:36.992679  108477 tlsconfig.go:241] Starting DynamicServingCertificateController
I0214 17:36:36.992722  108477 autoregister_controller.go:141] Starting autoregister controller
I0214 17:36:36.992733  108477 cache.go:32] Waiting for caches to sync for autoregister controller
I0214 17:36:36.992778  108477 available_controller.go:387] Starting AvailableConditionController
I0214 17:36:36.992788  108477 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0214 17:36:36.992844  108477 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0214 17:36:36.992860  108477 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0214 17:36:36.992918  108477 crd_finalizer.go:266] Starting CRDFinalizer
I0214 17:36:36.993220  108477 controller.go:81] Starting OpenAPI AggregationController
I0214 17:36:36.993397  108477 crdregistration_controller.go:111] Starting crd-autoregister controller
I0214 17:36:36.993416  108477 shared_informer.go:206] Waiting for caches to sync for crd-autoregister
I0214 17:36:36.993447  108477 controller.go:86] Starting OpenAPI controller
I0214 17:36:36.993471  108477 customresource_discovery_controller.go:209] Starting DiscoveryController
I0214 17:36:36.993493  108477 naming_controller.go:291] Starting NamingConditionController
I0214 17:36:36.993519  108477 establishing_controller.go:76] Starting EstablishingController
I0214 17:36:36.993540  108477 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I0214 17:36:36.993564  108477 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
W0214 17:36:36.994306  108477 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0214 17:36:36.994451  108477 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0214 17:36:36.994461  108477 shared_informer.go:206] Waiting for caches to sync for cluster_authentication_trust_controller
I0214 17:36:36.994611  108477 dynamic_cafile_content.go:166] Starting client-ca-bundle::/tmp/kubernetes-kube-apiserver963062582/client-ca.crt
I0214 17:36:36.994649  108477 dynamic_cafile_content.go:166] Starting request-header::/tmp/kubernetes-kube-apiserver963062582/proxy-ca.crt
E0214 17:36:37.039049  108477 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /ce2a047a-59cc-41c6-90f9-6bff3664200d/registry/masterleases/127.0.0.1, ResourceVersion: 0, AdditionalErrorMsg: 
I0214 17:36:37.092928  108477 cache.go:39] Caches are synced for autoregister controller
I0214 17:36:37.092934  108477 cache.go:39] Caches are synced for AvailableConditionController controller
I0214 17:36:37.093561  108477 shared_informer.go:213] Caches are synced for crd-autoregister 
I0214 17:36:37.093625  108477 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0214 17:36:37.094672  108477 shared_informer.go:213] Caches are synced for cluster_authentication_trust_controller 
E0214 17:36:37.944711  108477 controller.go:184] an error on the server ("") has prevented the request from succeeding (get endpoints kubernetes)
I0214 17:36:37.991680  108477 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0214 17:36:37.991723  108477 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0214 17:36:38.017712  108477 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I0214 17:36:38.022748  108477 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I0214 17:36:38.022775  108477 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
W0214 17:36:38.079879  108477 lease.go:224] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
E0214 17:36:38.081153  108477 controller.go:223] unable to sync kubernetes service: Endpoints "kubernetes" is invalid: subsets[0].addresses[0].ip: Invalid value: "127.0.0.1": may not be in the loopback range (127.0.0.0/8)
W0214 17:36:38.156423  108477 cacher.go:166] Terminating all watchers from cacher *apiextensions.CustomResourceDefinition
W0214 17:36:38.156904  108477 cacher.go:166] Terminating all watchers from cacher *core.LimitRange
W0214 17:36:38.157093  108477 cacher.go:166] Terminating all watchers from cacher *core.ResourceQuota
W0214 17:36:38.157286  108477 cacher.go:166] Terminating all watchers from cacher *core.Secret
W0214 17:36:38.157719  108477 cacher.go:166] Terminating all watchers from cacher *core.ConfigMap
W0214 17:36:38.157911  108477 cacher.go:166] Terminating all watchers from cacher *core.Namespace
W0214 17:36:38.158083  108477 cacher.go:166] Terminating all watchers from cacher *core.Endpoints
W0214 17:36:38.158441  108477 cacher.go:166] Terminating all watchers from cacher *core.Pod
W0214 17:36:38.158648  108477 cacher.go:166] Terminating all watchers from cacher *core.ServiceAccount
W0214 17:36:38.158855  108477 cacher.go:166] Terminating all watchers from cacher *core.Service
W0214 17:36:38.164212  108477 cacher.go:166] Terminating all watchers from cacher *node.RuntimeClass
W0214 17:36:38.166582  108477 cacher.go:166] Terminating all watchers from cacher *scheduling.PriorityClass
W0214 17:36:38.167670  108477 cacher.go:166] Terminating all watchers from cacher *storage.StorageClass
W0214 17:36:38.169330  108477 cacher.go:166] Terminating all watchers from cacher *admissionregistration.ValidatingWebhookConfiguration
W0214 17:36:38.169578  108477 cacher.go:166] Terminating all watchers from cacher *admissionregistration.MutatingWebhookConfiguration
W0214 17:36:38.169816  108477 cacher.go:166] Terminating all watchers from cacher *apiregistration.APIService
--- FAIL: TestApplyCRDStructuralSchema (9.24s)
    testserver.go:182: runtime-config=map[api/all:true]
    testserver.go:183: Starting kube-apiserver on port 36761...
    testserver.go:199: Waiting for /healthz to be ok...
    apply_crd_test.go:218: CustomResourceDefinition.apiextensions.k8s.io "noxus.mygroup.example.com" is invalid: [spec.validation.openAPIV3Schema.properties[spec].properties[ports].items.schema.properties[hostIP].default: Required value: default value must be set if key is not required, spec.validation.openAPIV3Schema.properties[spec].properties[ports].items.schema.properties[hostPort].default: Required value: default value must be set if key is not required, spec.validation.openAPIV3Schema.properties[spec].properties[ports].items.schema.properties[name].default: Required value: default value must be set if key is not required, spec.validation.openAPIV3Schema.properties[spec].properties[ports].items.schema.properties[protocol].default: Required value: default value must be set if key is not required, spec.validation.openAPIV3Schema.properties[spec].properties[ports].items.schema.properties[protocol].nullable: Forbidden: key cannot be nullable]

				from junit_20200214-173506.xml
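All five schema errors above come from the structural-schema rules for x-kubernetes-list-type: map lists: every property named in x-kubernetes-list-map-keys must either be listed as required or carry a default, and may not be nullable. A minimal sketch of a schema that satisfies those rules, assuming the k8s.io/apiextensions-apiserver v1 Go types (portsSchema, containerPort, and protocol are illustrative names, not the test's actual fixture):

package main

import (
	"encoding/json"
	"fmt"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

// portsSchema builds a hypothetical list-map schema that passes the
// validation quoted above: "containerPort" is a map key and required,
// so it needs no default; "protocol" is a map key but optional, so it
// carries a default and is not marked nullable.
func portsSchema() apiextensionsv1.JSONSchemaProps {
	listTypeMap := "map"
	defaultProtocol := apiextensionsv1.JSON{Raw: []byte(`"TCP"`)}
	return apiextensionsv1.JSONSchemaProps{
		Type:         "array",
		XListType:    &listTypeMap,
		XListMapKeys: []string{"containerPort", "protocol"},
		Items: &apiextensionsv1.JSONSchemaPropsOrArray{
			Schema: &apiextensionsv1.JSONSchemaProps{
				Type:     "object",
				Required: []string{"containerPort"},
				Properties: map[string]apiextensionsv1.JSONSchemaProps{
					"containerPort": {Type: "integer"},
					"protocol":      {Type: "string", Default: &defaultProtocol},
					// hostIP is not a map key, so none of the above rules apply to it.
					"hostIP": {Type: "string"},
				},
			},
		},
	}
}

func main() {
	out, _ := json.MarshalIndent(portsSchema(), "", "  ")
	fmt.Println(string(out))
}

Under these rules, the failing fixture's hostIP, hostPort, name, and protocol map keys would each need to be required or defaulted, and protocol could not be nullable, which matches the five errors reported.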



2480 passed tests, 4 skipped tests

Error lines from build-log.txt

Docker in Docker enabled, initializing...
================================================================================
Starting Docker: docker.
Waiting for docker to be ready, sleeping for 1 seconds.
[Barnacle] 2020/02/14 17:20:35 Cleaning up Docker data root...
[Barnacle] 2020/02/14 17:20:35 Removing all containers.
[Barnacle] 2020/02/14 17:20:35 Failed to list containers: Error response from daemon: client version 1.41 is too new. Maximum supported API version is 1.40
[Barnacle] 2020/02/14 17:20:35 Removing recently created images.
[Barnacle] 2020/02/14 17:20:35 Failed to list images: Error response from daemon: client version 1.41 is too new. Maximum supported API version is 1.40
[Barnacle] 2020/02/14 17:20:35 Pruning dangling images.
[Barnacle] 2020/02/14 17:20:35 Failed to list images: Error response from daemon: client version 1.41 is too new. Maximum supported API version is 1.40
[Barnacle] 2020/02/14 17:20:35 Pruning volumes.
[Barnacle] 2020/02/14 17:20:35 Failed to prune volumes: Error response from daemon: client version 1.41 is too new. Maximum supported API version is 1.40
[Barnacle] 2020/02/14 17:20:35 Done cleaning up Docker data root.
Remaining docker images and volumes are:
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
DRIVER              VOLUME NAME
Cleaning up binfmt_misc ...
================================================================================
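The repeated "client version 1.41 is too new" failures above are an API-version mismatch: the Barnacle cleanup tool's Docker client requests a newer API than the daemon in the CI image supports (1.40). A minimal sketch of the usual remedy, assuming the github.com/docker/docker/client Go SDK rather than Barnacle's actual code, is to negotiate the API version with the daemon instead of defaulting to the client's newest (for the docker CLI, the DOCKER_API_VERSION environment variable is the equivalent knob):

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	// WithAPIVersionNegotiation lowers the client's requested API version
	// to whatever the daemon reports (1.40 here) instead of failing with
	// "client version ... is too new".
	cli, err := client.NewClientWithOpts(
		client.FromEnv,
		client.WithAPIVersionNegotiation(),
	)
	if err != nil {
		panic(err)
	}
	containers, err := cli.ContainerList(context.Background(), types.ContainerListOptions{All: true})
	if err != nil {
		panic(err)
	}
	fmt.Printf("listed %d containers using API version %s\n", len(containers), cli.ClientVersion())
}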
... skipping 40 lines ...
Recording: record_command_canary
Running command: record_command_canary

+++ Running case: test-cmd.record_command_canary 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: record_command_canary
/home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh: line 155: bogus-expected-to-fail: command not found
!!! [0214 17:24:45] Call tree:
!!! [0214 17:24:45]  1: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:47 record_command_canary(...)
!!! [0214 17:24:45]  2: /home/prow/go/src/k8s.io/kubernetes/test/cmd/../../third_party/forked/shell2junit/sh2ju.sh:112 eVal(...)
!!! [0214 17:24:45]  3: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:131 juLog(...)
!!! [0214 17:24:45]  4: /home/prow/go/src/k8s.io/kubernetes/test/cmd/legacy-script.sh:159 record_command(...)
!!! [0214 17:24:45]  5: hack/make-rules/test-cmd.sh:35 source(...)
+++ exit code: 1
+++ error: 1
+++ [0214 17:24:45] Running kubeadm tests
+++ [0214 17:24:50] Building go targets for linux/amd64:
    cmd/kubeadm
+++ [0214 17:25:34] Running tests without code coverage
{"Time":"2020-02-14T17:27:03.734633816Z","Action":"output","Package":"k8s.io/kubernetes/cmd/kubeadm/test/cmd","Output":"ok  \tk8s.io/kubernetes/cmd/kubeadm/test/cmd\t50.112s\n"}
✓  cmd/kubeadm/test/cmd (50.112s)
... skipping 302 lines ...
+++ [0214 17:28:49] Building kube-controller-manager
+++ [0214 17:28:54] Building go targets for linux/amd64:
    cmd/kube-controller-manager
+++ [0214 17:29:23] Starting controller-manager
Flag --port has been deprecated, see --secure-port instead.
I0214 17:29:23.660206   55211 serving.go:313] Generated self-signed cert in-memory
W0214 17:29:23.976789   55211 authentication.go:410] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0214 17:29:23.976835   55211 authentication.go:268] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0214 17:29:23.976847   55211 authentication.go:292] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0214 17:29:23.976878   55211 authorization.go:177] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0214 17:29:23.976905   55211 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0214 17:29:23.976929   55211 controllermanager.go:161] Version: v1.18.0-alpha.5.127+bdcc6d4e537528
I0214 17:29:23.978031   55211 secure_serving.go:178] Serving securely on [::]:10257
I0214 17:29:23.978155   55211 tlsconfig.go:241] Starting DynamicServingCertificateController
I0214 17:29:23.978467   55211 deprecated_insecure_serving.go:53] Serving insecurely on [::]:10252
I0214 17:29:23.978539   55211 leaderelection.go:242] attempting to acquire leader lease  kube-system/kube-controller-manager...
... skipping 16 lines ...
W0214 17:29:24.246046   55211 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
W0214 17:29:24.246064   55211 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0214 17:29:24.246075   55211 controllermanager.go:533] Started "nodelifecycle"
I0214 17:29:24.246195   55211 node_lifecycle_controller.go:555] Starting node controller
I0214 17:29:24.246221   55211 shared_informer.go:206] Waiting for caches to sync for taint
W0214 17:29:24.246390   55211 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
E0214 17:29:24.246479   55211 core.go:90] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
W0214 17:29:24.246495   55211 controllermanager.go:525] Skipping "service"
W0214 17:29:24.246786   55211 mutation_detector.go:53] Mutation detector is enabled, this will result in memory leakage.
I0214 17:29:24.246842   55211 controllermanager.go:533] Started "pvc-protection"
I0214 17:29:24.246971   55211 pvc_protection_controller.go:101] Starting PVC protection controller
I0214 17:29:24.246991   55211 shared_informer.go:206] Waiting for caches to sync for PVC protection
I0214 17:29:24.247111   55211 controllermanager.go:533] Started "cronjob"
... skipping 130 lines ...
I0214 17:29:25.018440   55211 controllermanager.go:533] Started "daemonset"
I0214 17:29:25.018583   55211 daemon_controller.go:257] Starting daemon sets controller
I0214 17:29:25.018593   55211 shared_informer.go:206] Waiting for caches to sync for daemon sets
I0214 17:29:25.018704   55211 controllermanager.go:533] Started "csrcleaner"
I0214 17:29:25.018799   55211 cleaner.go:82] Starting CSR cleaner controller
I0214 17:29:25.018994   55211 node_lifecycle_controller.go:77] Sending events to api server
E0214 17:29:25.019029   55211 core.go:231] failed to start cloud node lifecycle controller: no cloud provider provided
W0214 17:29:25.019040   55211 controllermanager.go:525] Skipping "cloud-node-lifecycle"
I0214 17:29:25.019766   55211 controllermanager.go:533] Started "clusterrole-aggregation"
I0214 17:29:25.019879   55211 clusterroleaggregation_controller.go:149] Starting ClusterRoleAggregator
I0214 17:29:25.019899   55211 shared_informer.go:206] Waiting for caches to sync for ClusterRoleAggregator
I0214 17:29:25.026778   55211 controllermanager.go:533] Started "namespace"
I0214 17:29:25.027164   55211 controllermanager.go:533] Started "pv-protection"
... skipping 15 lines ...
  "gitCommit": "bdcc6d4e5375284f69ed96eacf0b4f086dcfd8d7",
  "gitTreeState": "clean",
  "buildDate": "2020-02-14T16:06:20Z",
  "goVersion": "go1.13.5",
  "compiler": "gc",
  "platform": "linux/amd64"
}
W0214 17:29:25.060430   55211 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="127.0.0.1" does not exist
I0214 17:29:25.118764   55211 shared_informer.go:213] Caches are synced for daemon sets 
I0214 17:29:25.120065   55211 shared_informer.go:213] Caches are synced for ClusterRoleAggregator 
I0214 17:29:25.127887   55211 shared_informer.go:213] Caches are synced for PV protection 
I0214 17:29:25.128087   55211 shared_informer.go:213] Caches are synced for GC 
E0214 17:29:25.128877   55211 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
E0214 17:29:25.128909   55211 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
E0214 17:29:25.132199   55211 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I0214 17:29:25.145640   55211 shared_informer.go:213] Caches are synced for TTL 
I0214 17:29:25.146399   55211 shared_informer.go:213] Caches are synced for taint 
I0214 17:29:25.146557   55211 taint_manager.go:187] Starting NoExecuteTaintManager
I0214 17:29:25.146584   55211 node_lifecycle_controller.go:1444] Initializing eviction metric for zone: 
I0214 17:29:25.146769   55211 node_lifecycle_controller.go:1210] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0214 17:29:25.147033   55211 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"127.0.0.1", UID:"bd864775-9232-45e1-825d-9f22ef2eaac4", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node 127.0.0.1 event: Registered Node 127.0.0.1 in Controller
... skipping 81 lines ...
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_RESTMapper_evaluation_tests
+++ [0214 17:29:28] Creating namespace namespace-1581701368-9065
namespace/namespace-1581701368-9065 created
Context "test" modified.
+++ [0214 17:29:28] Testing RESTMapper
+++ [0214 17:29:29] "kubectl get unknownresourcetype" returns error as expected: error: the server doesn't have a resource type "unknownresourcetype"
+++ exit code: 0
NAME                              SHORTNAMES   APIGROUP                       NAMESPACED   KIND
bindings                                                                      true         Binding
componentstatuses                 cs                                          false        ComponentStatus
configmaps                        cm                                          true         ConfigMap
endpoints                         ep                                          true         Endpoints
... skipping 57 lines ...
namespace/namespace-1581701373-7151 created
Context "test" modified.
+++ [0214 17:29:33] Testing clusterroles
rbac.sh:29: Successful get clusterroles/cluster-admin {{.metadata.name}}: cluster-admin
rbac.sh:30: Successful get clusterrolebindings/cluster-admin {{.metadata.name}}: cluster-admin
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created (dry run)
clusterrole.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterroles.rbac.authorization.k8s.io "pod-admin" not found
has:clusterroles.rbac.authorization.k8s.io "pod-admin" not found
clusterrole.rbac.authorization.k8s.io/pod-admin created
rbac.sh:42: Successful get clusterrole/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "pod-admin" deleted
... skipping 18 lines ...
clusterrole.rbac.authorization.k8s.io/url-reader created
rbac.sh:61: Successful get clusterrole/url-reader {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: get:
rbac.sh:62: Successful get clusterrole/url-reader {{range.rules}}{{range.nonResourceURLs}}{{.}}:{{end}}{{end}}: /logs/*:/healthz/*:
clusterrole.rbac.authorization.k8s.io/aggregation-reader created
rbac.sh:64: Successful get clusterrole/aggregation-reader {{.metadata.name}}: aggregation-reader
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin created (server dry run)
Successful
message:Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
has:clusterrolebindings.rbac.authorization.k8s.io "super-admin" not found
clusterrolebinding.rbac.authorization.k8s.io/super-admin created
rbac.sh:77: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (dry run)
clusterrolebinding.rbac.authorization.k8s.io/super-admin subjects updated (server dry run)
rbac.sh:80: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:
... skipping 58 lines ...
rbac.sh:102: Successful get clusterrolebinding/super-admin {{range.subjects}}{{.name}}:{{end}}: super-admin:foo:test-all-user:
rbac.sh:103: Successful get clusterrolebinding/super-group {{range.subjects}}{{.name}}:{{end}}: the-group:foo:test-all-user:
rbac.sh:104: Successful get clusterrolebinding/super-sa {{range.subjects}}{{.name}}:{{end}}: sa-name:foo:test-all-user:
rolebinding.rbac.authorization.k8s.io/admin created (dry run)
rolebinding.rbac.authorization.k8s.io/admin created (server dry run)
Successful
message:Error from server (NotFound): rolebindings.rbac.authorization.k8s.io "admin" not found
has: not found
rolebinding.rbac.authorization.k8s.io/admin created
rbac.sh:113: Successful get rolebinding/admin {{.roleRef.kind}}: ClusterRole
rbac.sh:114: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:
rolebinding.rbac.authorization.k8s.io/admin subjects updated
rbac.sh:116: Successful get rolebinding/admin {{range.subjects}}{{.name}}:{{end}}: default-admin:foo:
... skipping 25 lines ...
namespace/namespace-1581701381-10167 created
Context "test" modified.
+++ [0214 17:29:41] Testing role
role.rbac.authorization.k8s.io/pod-admin created (dry run)
role.rbac.authorization.k8s.io/pod-admin created (server dry run)
Successful
message:Error from server (NotFound): roles.rbac.authorization.k8s.io "pod-admin" not found
has: not found
role.rbac.authorization.k8s.io/pod-admin created
rbac.sh:155: Successful get role/pod-admin {{range.rules}}{{range.verbs}}{{.}}:{{end}}{{end}}: *:
rbac.sh:156: Successful get role/pod-admin {{range.rules}}{{range.resources}}{{.}}:{{end}}{{end}}: pods:
rbac.sh:157: Successful get role/pod-admin {{range.rules}}{{range.apiGroups}}{{.}}:{{end}}{{end}}: :
Successful
... skipping 459 lines ...
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
core.sh:189: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: resource(s) were provided, but no name, label selector, or --all flag specified
core.sh:193: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:197: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
error: setting 'all' parameter but found a non empty selector. 
core.sh:201: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:205: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:209: Successful get pods -l'name in (valid-pod)' {{range.items}}{{.metadata.name}}:{{end}}: 
core.sh:214: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"test-kubectl-describe-pod\" }}found{{end}}{{end}}:: :
... skipping 19 lines ...
poddisruptionbudget.policy/test-pdb-2 created
core.sh:258: Successful get pdb/test-pdb-2 --namespace=test-kubectl-describe-pod {{.spec.minAvailable}}: 50%
poddisruptionbudget.policy/test-pdb-3 created
core.sh:264: Successful get pdb/test-pdb-3 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 2
poddisruptionbudget.policy/test-pdb-4 created
core.sh:268: Successful get pdb/test-pdb-4 --namespace=test-kubectl-describe-pod {{.spec.maxUnavailable}}: 50%
error: min-available and max-unavailable cannot be both specified
core.sh:274: Successful get pods --namespace=test-kubectl-describe-pod {{range.items}}{{.metadata.name}}:{{end}}: 
pod/env-test-pod created
matched TEST_CMD_1
matched <set to the key 'key-1' in secret 'test-secret'>
matched TEST_CMD_2
matched <set to the key 'key-2' of config map 'test-configmap'>
... skipping 206 lines ...
pod/valid-pod patched
core.sh:517: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: changed-with-yaml:
pod/valid-pod patched
core.sh:522: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:3.1:
pod/valid-pod patched
core.sh:538: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
+++ [0214 17:30:11] "kubectl patch with resourceVersion 546" returns error as expected: Error from server (Conflict): Operation cannot be fulfilled on pods "valid-pod": the object has been modified; please apply your changes to the latest version and try again
pod "valid-pod" deleted
pod/valid-pod replaced
core.sh:562: Successful get pod valid-pod {{(index .spec.containers 0).name}}: replaced-k8s-serve-hostname
Successful
message:error: --grace-period must have --force specified
has:\-\-grace-period must have \-\-force specified
Successful
message:error: --timeout must have --force specified
has:\-\-timeout must have \-\-force specified
node/node-v1-test created
W0214 17:30:12.890396   55211 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="node-v1-test" does not exist
node/node-v1-test replaced
core.sh:599: Successful get node node-v1-test {{.metadata.annotations.a}}: b
node "node-v1-test" deleted
core.sh:606: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: nginx:
core.sh:609: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: k8s.gcr.io/serve_hostname:
Edit cancelled, no changes made.
... skipping 22 lines ...
spec:
  containers:
  - image: k8s.gcr.io/pause:2.0
    name: kubernetes-pause
has:localonlyvalue
core.sh:632: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
error: 'name' already has a value (valid-pod), and --overwrite is false
core.sh:636: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
core.sh:640: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod
pod/valid-pod labeled
core.sh:644: Successful get pod valid-pod {{.metadata.labels.name}}: valid-pod-super-sayan
core.sh:648: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
... skipping 83 lines ...
+++ Running case: test-cmd.run_kubectl_create_error_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_kubectl_create_error_tests
+++ [0214 17:30:23] Creating namespace namespace-1581701423-17053
namespace/namespace-1581701423-17053 created
Context "test" modified.
+++ [0214 17:30:23] Testing kubectl create with error
Error: must specify one of -f and -k

Create a resource from a file or from stdin.

 JSON and YAML formats are accepted.

Examples:
... skipping 41 lines ...

Usage:
  kubectl create -f FILENAME [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
+++ [0214 17:30:23] "kubectl create with empty string list returns error as expected: error: error validating "hack/testdata/invalid-rc-with-empty-args.yaml": error validating data: ValidationError(ReplicationController.spec.template.spec.containers[0].args): unknown object type "nil" in ReplicationController.spec.template.spec.containers[0].args[0]; if you choose to ignore these errors, turn validation off with --validate=false
kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
+++ exit code: 0
Recording: run_kubectl_apply_tests
Running command: run_kubectl_apply_tests

... skipping 17 lines ...
pod "test-pod" deleted
customresourcedefinition.apiextensions.k8s.io/resources.mygroup.example.com created
I0214 17:30:26.662510   51768 client.go:361] parsed scheme: "endpoint"
I0214 17:30:26.662560   51768 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379  <nil> 0 <nil>}]
I0214 17:30:26.666009   51768 controller.go:606] quota admission added evaluator for: resources.mygroup.example.com
kind.mygroup.example.com/myobj serverside-applied (server dry run)
Error from server (NotFound): resources.mygroup.example.com "myobj" not found
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
+++ exit code: 0
Recording: run_kubectl_run_tests
Running command: run_kubectl_run_tests

+++ Running case: test-cmd.run_kubectl_run_tests 
... skipping 12 lines ...
pod "nginx-extensions" deleted
Successful
message:pod/test1 created
has:pod/test1 created
pod "test1" deleted
Successful
message:error: Invalid image name "InvalidImageName": invalid reference format
has:error: Invalid image name "InvalidImageName": invalid reference format
+++ exit code: 0
Recording: run_kubectl_create_filter_tests
Running command: run_kubectl_create_filter_tests

+++ Running case: test-cmd.run_kubectl_create_filter_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 3 lines ...
Context "test" modified.
+++ [0214 17:30:28] Testing kubectl create filter
create.sh:50: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/selector-test-pod created
create.sh:54: Successful get pods selector-test-pod {{.metadata.labels.name}}: selector-test-pod
Successful
message:Error from server (NotFound): pods "selector-test-pod-dont-apply" not found
has:pods "selector-test-pod-dont-apply" not found
pod "selector-test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_apply_deployments_tests
Running command: run_kubectl_apply_deployments_tests

... skipping 31 lines ...
I0214 17:30:31.571209   55211 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581701429-16792", Name:"nginx", UID:"100aca63-19c6-4880-9902-9aafdd15bc5c", APIVersion:"apps/v1", ResourceVersion:"604", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-8484dd655 to 3
I0214 17:30:31.576117   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701429-16792", Name:"nginx-8484dd655", UID:"a8fe0910-9317-434c-bdad-decf6c8979e8", APIVersion:"apps/v1", ResourceVersion:"605", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-tmfzh
I0214 17:30:31.580367   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701429-16792", Name:"nginx-8484dd655", UID:"a8fe0910-9317-434c-bdad-decf6c8979e8", APIVersion:"apps/v1", ResourceVersion:"605", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-vrj5k
I0214 17:30:31.581255   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701429-16792", Name:"nginx-8484dd655", UID:"a8fe0910-9317-434c-bdad-decf6c8979e8", APIVersion:"apps/v1", ResourceVersion:"605", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-8484dd655-xmlng
apps.sh:149: Successful get deployment nginx {{.metadata.name}}: nginx
Successful
message:Error from server (Conflict): error when applying patch:
{"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"Deployment\",\"metadata\":{\"annotations\":{},\"labels\":{\"name\":\"nginx\"},\"name\":\"nginx\",\"namespace\":\"namespace-1581701429-16792\",\"resourceVersion\":\"99\"},\"spec\":{\"replicas\":3,\"selector\":{\"matchLabels\":{\"name\":\"nginx2\"}},\"template\":{\"metadata\":{\"labels\":{\"name\":\"nginx2\"}},\"spec\":{\"containers\":[{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"nginx\",\"ports\":[{\"containerPort\":80}]}]}}}}\n"},"resourceVersion":"99"},"spec":{"selector":{"matchLabels":{"name":"nginx2"}},"template":{"metadata":{"labels":{"name":"nginx2"}}}}}
to:
Resource: "apps/v1, Resource=deployments", GroupVersionKind: "apps/v1, Kind=Deployment"
Name: "nginx", Namespace: "namespace-1581701429-16792"
for: "hack/testdata/deployment-label-change2.yaml": Operation cannot be fulfilled on deployments.apps "nginx": the object has been modified; please apply your changes to the latest version and try again
has:Error from server (Conflict)
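The Conflict is deliberate: the applied manifest pins resourceVersion "99" in its metadata, and the live Deployment has moved past it, so the server refuses the patch rather than clobbering newer state:
    # Re-applying the same file keeps failing until the pinned
    # resourceVersion is removed or brought up to date.
    kubectl apply -f hack/testdata/deployment-label-change2.yaml   # Error from server (Conflict)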
I0214 17:30:37.688172   55211 horizontal.go:354] Horizontal Pod Autoscaler frontend has been deleted in namespace-1581701420-1454
deployment.apps/nginx configured
I0214 17:30:41.134273   55211 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581701429-16792", Name:"nginx", UID:"e556d634-5241-4132-a9ed-d8ccb69afd1d", APIVersion:"apps/v1", ResourceVersion:"646", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-668b6c7744 to 3
I0214 17:30:41.138831   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701429-16792", Name:"nginx-668b6c7744", UID:"25338840-37ed-43be-b3f2-53311ba8afda", APIVersion:"apps/v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-484tn
I0214 17:30:41.141431   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701429-16792", Name:"nginx-668b6c7744", UID:"25338840-37ed-43be-b3f2-53311ba8afda", APIVersion:"apps/v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-xnhhk
I0214 17:30:41.143995   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701429-16792", Name:"nginx-668b6c7744", UID:"25338840-37ed-43be-b3f2-53311ba8afda", APIVersion:"apps/v1", ResourceVersion:"647", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-668b6c7744-kvnkr
... skipping 147 lines ...
+++ [0214 17:30:49] Creating namespace namespace-1581701449-12123
namespace/namespace-1581701449-12123 created
Context "test" modified.
+++ [0214 17:30:49] Testing kubectl get
get.sh:29: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:37: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
get.sh:45: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:{
    "apiVersion": "v1",
    "items": [],
... skipping 23 lines ...
has not:No resources found
Successful
message:NAME
has not:No resources found
get.sh:73: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:error: the server doesn't have a resource type "foobar"
has not:No resources found
Successful
message:No resources found in namespace-1581701449-12123 namespace.
has:No resources found
Successful
message:
has not:No resources found
Successful
message:No resources found in namespace-1581701449-12123 namespace.
has:No resources found
get.sh:93: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
Successful
message:Error from server (NotFound): pods "abc" not found
has not:List
Successful
message:I0214 17:30:51.053699   66230 loader.go:375] Config loaded from file:  /tmp/tmp.q8V4h0HSIr/.kube/config
I0214 17:30:51.055180   66230 round_trippers.go:443] GET http://127.0.0.1:8080/version?timeout=32s 200 OK in 1 milliseconds
I0214 17:30:51.081366   66230 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/pods 200 OK in 1 milliseconds
I0214 17:30:51.088699   66230 round_trippers.go:443] GET http://127.0.0.1:8080/api/v1/namespaces/default/replicationcontrollers 200 OK in 6 milliseconds
... skipping 482 lines ...
Successful
message:NAME    DATA   AGE
one     0      0s
three   0      0s
two     0      0s
STATUS    REASON          MESSAGE
Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has not:watch is only supported on individual resources
Successful
message:STATUS    REASON          MESSAGE
Failure   InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has not:watch is only supported on individual resources
+++ [0214 17:30:57] Creating namespace namespace-1581701457-15419
namespace/namespace-1581701457-15419 created
Context "test" modified.
get.sh:153: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/valid-pod created
... skipping 56 lines ...
}
get.sh:158: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
(B<no value>Successful
message:valid-pod:
has:valid-pod:
Successful
message:error: error executing jsonpath "{.missing}": Error executing template: missing is not found. Printing more information for debugging the template:
	template was:
		{.missing}
	object given to jsonpath engine was:
		map[string]interface {}{"apiVersion":"v1", "kind":"Pod", "metadata":map[string]interface {}{"creationTimestamp":"2020-02-14T17:30:58Z", "labels":map[string]interface {}{"name":"valid-pod"}, "name":"valid-pod", "namespace":"namespace-1581701457-15419", "resourceVersion":"731", "selfLink":"/api/v1/namespaces/namespace-1581701457-15419/pods/valid-pod", "uid":"d44afb08-a7d5-4a34-ad95-86d46d62f7fb"}, "spec":map[string]interface {}{"containers":[]interface {}{map[string]interface {}{"image":"k8s.gcr.io/serve_hostname", "imagePullPolicy":"Always", "name":"kubernetes-serve-hostname", "resources":map[string]interface {}{"limits":map[string]interface {}{"cpu":"1", "memory":"512Mi"}, "requests":map[string]interface {}{"cpu":"1", "memory":"512Mi"}}, "terminationMessagePath":"/dev/termination-log", "terminationMessagePolicy":"File"}}, "dnsPolicy":"ClusterFirst", "enableServiceLinks":true, "priority":0, "restartPolicy":"Always", "schedulerName":"default-scheduler", "securityContext":map[string]interface {}{}, "terminationGracePeriodSeconds":30}, "status":map[string]interface {}{"phase":"Pending", "qosClass":"Guaranteed"}}
has:missing is not found
error: error executing template "{{.missing}}": template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing"
Successful
message:Error executing template: template: output:1:2: executing "output" at <.missing>: map has no entry for key "missing". Printing more information for debugging the template:
	template was:
		{{.missing}}
	raw data was:
		{"apiVersion":"v1","kind":"Pod","metadata":{"creationTimestamp":"2020-02-14T17:30:58Z","labels":{"name":"valid-pod"},"name":"valid-pod","namespace":"namespace-1581701457-15419","resourceVersion":"731","selfLink":"/api/v1/namespaces/namespace-1581701457-15419/pods/valid-pod","uid":"d44afb08-a7d5-4a34-ad95-86d46d62f7fb"},"spec":{"containers":[{"image":"k8s.gcr.io/serve_hostname","imagePullPolicy":"Always","name":"kubernetes-serve-hostname","resources":{"limits":{"cpu":"1","memory":"512Mi"},"requests":{"cpu":"1","memory":"512Mi"}},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File"}],"dnsPolicy":"ClusterFirst","enableServiceLinks":true,"priority":0,"restartPolicy":"Always","schedulerName":"default-scheduler","securityContext":{},"terminationGracePeriodSeconds":30},"status":{"phase":"Pending","qosClass":"Guaranteed"}}
	object given to template engine was:
		map[apiVersion:v1 kind:Pod metadata:map[creationTimestamp:2020-02-14T17:30:58Z labels:map[name:valid-pod] name:valid-pod namespace:namespace-1581701457-15419 resourceVersion:731 selfLink:/api/v1/namespaces/namespace-1581701457-15419/pods/valid-pod uid:d44afb08-a7d5-4a34-ad95-86d46d62f7fb] spec:map[containers:[map[image:k8s.gcr.io/serve_hostname imagePullPolicy:Always name:kubernetes-serve-hostname resources:map[limits:map[cpu:1 memory:512Mi] requests:map[cpu:1 memory:512Mi]] terminationMessagePath:/dev/termination-log terminationMessagePolicy:File]] dnsPolicy:ClusterFirst enableServiceLinks:true priority:0 restartPolicy:Always schedulerName:default-scheduler securityContext:map[] terminationGracePeriodSeconds:30] status:map[phase:Pending qosClass:Guaranteed]]
has:map has no entry for key "missing"
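Both template failures above happen client-side and can be reproduced directly against the same pod:
    kubectl get pod valid-pod -o jsonpath='{.missing}'       # jsonpath: missing is not found
    kubectl get pod valid-pod -o go-template='{{.missing}}'  # go-template: map has no entry for key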
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
STATUS      REASON          MESSAGE
Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has:STATUS
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
STATUS      REASON          MESSAGE
Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has:valid-pod
Successful
message:pod/valid-pod
status/<unknown>
has not:STATUS
Successful
... skipping 45 lines ...
      (Client.Timeout exceeded while reading body)'
    reason: UnexpectedServerResponse
  - message: 'unable to decode an event from the watch stream: net/http: request canceled
      (Client.Timeout exceeded while reading body)'
    reason: ClientWatchDecoding
kind: Status
message: 'an error on the server ("unable to decode an event from the watch stream:
  net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented
  the request from succeeding'
metadata: {}
reason: InternalError
status: Failure
has not:STATUS
... skipping 42 lines ...
      (Client.Timeout exceeded while reading body)'
    reason: UnexpectedServerResponse
  - message: 'unable to decode an event from the watch stream: net/http: request canceled
      (Client.Timeout exceeded while reading body)'
    reason: ClientWatchDecoding
kind: Status
message: 'an error on the server ("unable to decode an event from the watch stream:
  net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented
  the request from succeeding'
metadata: {}
reason: InternalError
status: Failure
has:name: valid-pod
Successful
message:Error from server (NotFound): pods "invalid-pod" not found
has:"invalid-pod" not found
pod "valid-pod" deleted
get.sh:196: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
pod/redis-master created
pod/valid-pod created
Successful
... skipping 35 lines ...
+++ command: run_kubectl_exec_pod_tests
+++ [0214 17:31:04] Creating namespace namespace-1581701464-15551
namespace/namespace-1581701464-15551 created
Context "test" modified.
+++ [0214 17:31:04] Testing kubectl exec POD COMMAND
Successful
message:Error from server (NotFound): pods "abc" not found
has:pods "abc" not found
pod/test-pod created
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pods "test-pod" not found
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
pod "test-pod" deleted
+++ exit code: 0
Recording: run_kubectl_exec_resource_name_tests
Running command: run_kubectl_exec_resource_name_tests

... skipping 2 lines ...
+++ command: run_kubectl_exec_resource_name_tests
+++ [0214 17:31:04] Creating namespace namespace-1581701464-21609
namespace/namespace-1581701464-21609 created
Context "test" modified.
+++ [0214 17:31:04] Testing kubectl exec TYPE/NAME COMMAND
Successful
message:error: the server doesn't have a resource type "foo"
has:error:
Successful
message:Error from server (NotFound): deployments.apps "bar" not found
has:"bar" not found
pod/test-pod created
replicaset.apps/frontend created
I0214 17:31:05.726174   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701464-21609", Name:"frontend", UID:"f3b51146-0198-4f04-91cd-f813ae898b99", APIVersion:"apps/v1", ResourceVersion:"792", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-c69sg
I0214 17:31:05.730088   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701464-21609", Name:"frontend", UID:"f3b51146-0198-4f04-91cd-f813ae898b99", APIVersion:"apps/v1", ResourceVersion:"792", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-xgcbk
I0214 17:31:05.730127   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701464-21609", Name:"frontend", UID:"f3b51146-0198-4f04-91cd-f813ae898b99", APIVersion:"apps/v1", ResourceVersion:"792", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-kjdqh
configmap/test-set-env-config created
Successful
message:error: cannot attach to *v1.ConfigMap: selector for *v1.ConfigMap not implemented
has:not implemented
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:not found
Successful
message:Error from server (BadRequest): pod test-pod does not have a host assigned
has not:pod or type/name must be specified
Successful
message:Error from server (BadRequest): pod frontend-c69sg does not have a host assigned
has not:not found
Successful
message:Error from server (BadRequest): pod frontend-c69sg does not have a host assigned
has not:pod or type/name must be specified
pod "test-pod" deleted
replicaset.apps "frontend" deleted
configmap "test-set-env-config" deleted
+++ exit code: 0
Recording: run_create_secret_tests
Running command: run_create_secret_tests

+++ Running case: test-cmd.run_create_secret_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_create_secret_tests
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
W0214 17:31:06.776138   67408 helpers.go:534] --dry-run is deprecated and can be replaced with --dry-run=client.
Successful
message:Error from server (NotFound): secrets "mysecret" not found
has:secrets "mysecret" not found
Successful
message:user-specified
has:user-specified
W0214 17:31:06.955778   67438 helpers.go:534] --dry-run is deprecated and can be replaced with --dry-run=client.
Successful
... skipping 4 lines ...
has:uid
Successful
message:{"kind":"ConfigMap","apiVersion":"v1","metadata":{"name":"tester-update-cm","namespace":"default","selfLink":"/api/v1/namespaces/default/configmaps/tester-update-cm","uid":"776a3a25-152a-4b53-878f-6313cf95794a","resourceVersion":"815","creationTimestamp":"2020-02-14T17:31:07Z"},"data":{"key1":"config1"}}
has:config1
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"tester-update-cm","kind":"configmaps","uid":"776a3a25-152a-4b53-878f-6313cf95794a"}}
Successful
message:Error from server (NotFound): configmaps "tester-update-cm" not found
has:configmaps "tester-update-cm" not found
+++ exit code: 0
Recording: run_kubectl_create_kustomization_directory_tests
Running command: run_kubectl_create_kustomization_directory_tests

+++ Running case: test-cmd.run_kubectl_create_kustomization_directory_tests 
... skipping 110 lines ...
valid-pod   0/1     Pending   0          0s
has:valid-pod
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          0s
STATUS      REASON          MESSAGE
Failure     InternalError   an error on the server ("unable to decode an event from the watch stream: net/http: request canceled (Client.Timeout exceeded while reading body)") has prevented the request from succeeding
has:Timeout exceeded while reading body
Successful
message:NAME        READY   STATUS    RESTARTS   AGE
valid-pod   0/1     Pending   0          1s
has:valid-pod
Successful
message:error: Invalid timeout value. Timeout must be a single integer in seconds, or an integer followed by a corresponding time unit (e.g. 1s | 2m | 3h)
has:Invalid timeout value
pod "valid-pod" deleted
+++ exit code: 0
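Sketch of the timeout checks: a numeric --request-timeout cuts the watch off on the client (producing the InternalError status printed above), while a non-duration value fails flag validation:
    kubectl get pods --watch --request-timeout=1   # watch aborted after ~1s
    kubectl get pods --request-timeout=foo         # error: Invalid timeout value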
Recording: run_crd_tests
Running command: run_crd_tests

... skipping 158 lines ...
foo.company.com/test patched
crd.sh:236: Successful get foos/test {{.patched}}: value1
foo.company.com/test patched
crd.sh:238: Successful get foos/test {{.patched}}: value2
foo.company.com/test patched
crd.sh:240: Successful get foos/test {{.patched}}: <no value>
+++ [0214 17:31:17] "kubectl patch --local" returns error as expected for CustomResource: error: cannot apply strategic merge patch for company.com/v1, Kind=Foo locally, try --type merge
{
    "apiVersion": "company.com/v1",
    "kind": "Foo",
    "metadata": {
        "annotations": {
            "kubernetes.io/change-cause": "kubectl patch foos/test --server=http://127.0.0.1:8080 --match-server-version=true --patch={\"patched\":null} --type=merge --record=true"
... skipping 193 lines ...
crd.sh:450: Successful get bars {{range.items}}{{.metadata.name}}:{{end}}: 
namespace/non-native-resources created
bar.company.com/test created
crd.sh:455: Successful get bars {{len .items}}: 1
(Bnamespace "non-native-resources" deleted
crd.sh:458: Successful get bars {{len .items}}: 0
Error from server (NotFound): namespaces "non-native-resources" not found
customresourcedefinition.apiextensions.k8s.io "foos.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "bars.company.com" deleted
customresourcedefinition.apiextensions.k8s.io "resources.mygroup.example.com" deleted
customresourcedefinition.apiextensions.k8s.io "validfoos.company.com" deleted
+++ exit code: 0
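One CRD behaviour worth restating from the run above: custom resources carry no strategic-merge-patch metadata, so patches must name a type explicitly, as the test does:
    # Strategic merge (the default) fails for company.com/v1 Foo;
    # a JSON merge patch works instead.
    kubectl patch foos/test --type=merge -p '{"patched":"value1"}'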
+++ [0214 17:31:42] Testing recursive resources
... skipping 2 lines ...
Context "test" modified.
generic-resources.sh:202: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
generic-resources.sh:206: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:pod/busybox0 created
pod/busybox1 created
error: error validating "hack/testdata/recursive/pod/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
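All of the recursive failures in this block trace back to one intentionally broken file: busybox-broken.yaml spells the kind key as "ind" (visible in the decoded JSON below), so it fails validation while its siblings still apply. The invocation is roughly:
    # Walk the directory tree; good manifests are created and the
    # broken one is reported without aborting the rest.
    kubectl create -f hack/testdata/recursive/pod --recursive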
W0214 17:31:42.734305   51768 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured
E0214 17:31:42.735384   55211 reflector.go:380] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
E0214 17:31:42.736597   55211 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:211: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
W0214 17:31:42.852738   51768 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured
E0214 17:31:42.853668   55211 reflector.go:380] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
E0214 17:31:42.854290   55211 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
W0214 17:31:42.949071   51768 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured
E0214 17:31:42.950162   55211 reflector.go:380] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
E0214 17:31:42.950874   55211 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:220: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: busybox:busybox:
Successful
message:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
W0214 17:31:43.064273   51768 cacher.go:166] Terminating all watchers from cacher *unstructured.Unstructured
E0214 17:31:43.065495   55211 reflector.go:380] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource
E0214 17:31:43.066185   55211 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:227: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:231: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:pod/busybox0 replaced
pod/busybox1 replaced
error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:236: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:Name:         busybox0
Namespace:    namespace-1581701502-30436
Priority:     0
Node:         <none>
... skipping 159 lines ...
has:Object 'Kind' is missing
generic-resources.sh:246: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:250: Successful get pods {{range.items}}{{.metadata.annotations.annotatekey}}:{{end}}: annotatevalue:annotatevalue:
Successful
message:pod/busybox0 annotated
pod/busybox1 annotated
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:255: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:259: Successful get pods {{range.items}}{{.metadata.labels.status}}:{{end}}: replaced:replaced:
Successful
message:error: error validating "hack/testdata/recursive/pod-modify/pod/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
has:error validating data: kind not set
generic-resources.sh:265: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx created
I0214 17:31:44.536217   55211 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581701502-30436", Name:"nginx", UID:"792d40ef-445b-49f1-b1bc-ecd0e73a6e57", APIVersion:"apps/v1", ResourceVersion:"995", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-f87d999f7 to 3
I0214 17:31:44.538223   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701502-30436", Name:"nginx-f87d999f7", UID:"91f04abe-e688-48e1-b829-74252b478b7c", APIVersion:"apps/v1", ResourceVersion:"996", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-kftrm
I0214 17:31:44.540334   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701502-30436", Name:"nginx-f87d999f7", UID:"91f04abe-e688-48e1-b829-74252b478b7c", APIVersion:"apps/v1", ResourceVersion:"996", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-7klnv
I0214 17:31:44.541496   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701502-30436", Name:"nginx-f87d999f7", UID:"91f04abe-e688-48e1-b829-74252b478b7c", APIVersion:"apps/v1", ResourceVersion:"996", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-f87d999f7-b6sc2
E0214 17:31:44.550365   55211 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:269: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx:
generic-resources.sh:270: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
generic-resources.sh:274: Successful get deployment nginx {{ .apiVersion }}: apps/v1
Successful
... skipping 42 lines ...
deployment.apps "nginx" deleted
generic-resources.sh:281: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:285: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:kubectl convert is DEPRECATED and will be removed in a future version.
In order to convert, kubectl apply the object to the cluster, then kubectl get at the desired version.
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
E0214 17:31:45.331288   55211 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:290: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:busybox0:busybox1:
Successful
message:busybox0:busybox1:error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:299: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
E0214 17:31:45.682566   55211 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:304: Successful get pods {{range.items}}{{.metadata.labels.mylabel}}:{{end}}: myvalue:myvalue:
Successful
message:pod/busybox0 labeled
pod/busybox1 labeled
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:309: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
E0214 17:31:45.952395   55211 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:314: Successful get pods {{range.items}}{{(index .spec.containers 0).image}}:{{end}}: prom/busybox:prom/busybox:
Successful
message:pod/busybox0 patched
pod/busybox1 patched
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
generic-resources.sh:319: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:323: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "busybox0" force deleted
pod "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/pod/pod/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"Pod","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}'
has:Object 'Kind' is missing
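The "Immediate deletion" warning comes from a forced delete with zero grace period, which skips waiting for the kubelet to confirm termination; approximately:
    kubectl delete -f hack/testdata/recursive/pod --recursive --force --grace-period=0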
generic-resources.sh:328: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
I0214 17:31:46.445535   55211 namespace_controller.go:185] Namespace has been deleted non-native-resources
replicationcontroller/busybox0 created
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0214 17:31:46.557556   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581701502-30436", Name:"busybox0", UID:"0ee50cb1-799e-4a7a-bdb4-b0c24203e6c3", APIVersion:"v1", ResourceVersion:"1027", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-m8xth
I0214 17:31:46.562140   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581701502-30436", Name:"busybox1", UID:"7ff423fa-30f9-408e-b87e-ab7cf5775d91", APIVersion:"v1", ResourceVersion:"1029", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-pwdpg
generic-resources.sh:332: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:337: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:338: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:339: Successful get rc busybox1 {{.spec.replicas}}: 1
generic-resources.sh:344: Successful get hpa busybox0 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
generic-resources.sh:345: Successful get hpa busybox1 {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 80
Successful
message:horizontalpodautoscaler.autoscaling/busybox0 autoscaled
horizontalpodautoscaler.autoscaling/busybox1 autoscaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
horizontalpodautoscaler.autoscaling "busybox0" deleted
horizontalpodautoscaler.autoscaling "busybox1" deleted
generic-resources.sh:353: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
generic-resources.sh:354: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:355: Successful get rc busybox1 {{.spec.replicas}}: 1
generic-resources.sh:359: Successful get service busybox0 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
generic-resources.sh:360: Successful get service busybox1 {{(index .spec.ports 0).name}} {{(index .spec.ports 0).port}}: <no value> 80
Successful
message:service/busybox0 exposed
service/busybox1 exposed
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:366: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0214 17:31:48.117094   55211 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:367: Successful get rc busybox0 {{.spec.replicas}}: 1
generic-resources.sh:368: Successful get rc busybox1 {{.spec.replicas}}: 1
I0214 17:31:48.365639   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581701502-30436", Name:"busybox0", UID:"0ee50cb1-799e-4a7a-bdb4-b0c24203e6c3", APIVersion:"v1", ResourceVersion:"1053", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-ljvds
I0214 17:31:48.377323   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581701502-30436", Name:"busybox1", UID:"7ff423fa-30f9-408e-b87e-ab7cf5775d91", APIVersion:"v1", ResourceVersion:"1057", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-s9qmw
generic-resources.sh:372: Successful get rc busybox0 {{.spec.replicas}}: 2
generic-resources.sh:373: Successful get rc busybox1 {{.spec.replicas}}: 2
Successful
message:replicationcontroller/busybox0 scaled
replicationcontroller/busybox1 scaled
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:378: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
E0214 17:31:48.682562   55211 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
generic-resources.sh:382: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
Successful
message:warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
generic-resources.sh:387: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
deployment.apps/nginx1-deployment created
I0214 17:31:49.143264   55211 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581701502-30436", Name:"nginx1-deployment", UID:"c33cd291-e084-4697-baec-a7ad4b0b33e2", APIVersion:"apps/v1", ResourceVersion:"1074", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx1-deployment-7bdbbfb5cf to 2
deployment.apps/nginx0-deployment created
I0214 17:31:49.146930   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701502-30436", Name:"nginx1-deployment-7bdbbfb5cf", UID:"e023e516-9fef-4e86-aced-ef47599879c2", APIVersion:"apps/v1", ResourceVersion:"1075", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-4tdz6
error: error validating "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0214 17:31:49.149887   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701502-30436", Name:"nginx1-deployment-7bdbbfb5cf", UID:"e023e516-9fef-4e86-aced-ef47599879c2", APIVersion:"apps/v1", ResourceVersion:"1075", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx1-deployment-7bdbbfb5cf-r6g5g
I0214 17:31:49.153083   55211 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581701502-30436", Name:"nginx0-deployment", UID:"6c033858-396d-4ee2-9459-f9c10d1780ec", APIVersion:"apps/v1", ResourceVersion:"1076", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx0-deployment-57c6bff7f6 to 2
I0214 17:31:49.157095   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701502-30436", Name:"nginx0-deployment-57c6bff7f6", UID:"3d90796a-8c2a-49f9-b19b-674718e46604", APIVersion:"apps/v1", ResourceVersion:"1082", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-wcvjs
I0214 17:31:49.164619   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701502-30436", Name:"nginx0-deployment-57c6bff7f6", UID:"3d90796a-8c2a-49f9-b19b-674718e46604", APIVersion:"apps/v1", ResourceVersion:"1082", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx0-deployment-57c6bff7f6-t689l
generic-resources.sh:391: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx0-deployment:nginx1-deployment:
generic-resources.sh:392: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
generic-resources.sh:396: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:k8s.gcr.io/nginx:1.7.9:
Successful
message:deployment.apps/nginx1-deployment skipped rollback (current template already matches revision 1)
deployment.apps/nginx0-deployment skipped rollback (current template already matches revision 1)
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
deployment.apps/nginx1-deployment paused
deployment.apps/nginx0-deployment paused
generic-resources.sh:404: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: true:true:
Successful
message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
deployment.apps/nginx1-deployment resumed
deployment.apps/nginx0-deployment resumed
generic-resources.sh:410: Successful get deployment {{range.items}}{{.spec.paused}}:{{end}}: <no value>:<no value>:
Successful
message:unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
E0214 17:31:50.067307   55211 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx0-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:nginx1-deployment
Successful
message:deployment.apps/nginx1-deployment 
REVISION  CHANGE-CAUSE
1         <none>

deployment.apps/nginx0-deployment 
REVISION  CHANGE-CAUSE
1         <none>

error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
has:Object 'Kind' is missing
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
deployment.apps "nginx1-deployment" force deleted
deployment.apps "nginx0-deployment" force deleted
error: unable to decode "hack/testdata/recursive/deployment/deployment/nginx-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"apps/v1","ind":"Deployment","metadata":{"labels":{"app":"nginx2-deployment"},"name":"nginx2-deployment"},"spec":{"replicas":2,"selector":{"matchLabels":{"app":"nginx2"}},"template":{"metadata":{"labels":{"app":"nginx2"}},"spec":{"containers":[{"image":"k8s.gcr.io/nginx:1.7.9","name":"nginx","ports":[{"containerPort":80}]}]}}}}'
generic-resources.sh:426: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: 
replicationcontroller/busybox0 created
I0214 17:31:51.473534   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581701502-30436", Name:"busybox0", UID:"b15d3704-e3b1-4e38-a453-e6a0d8ee724c", APIVersion:"v1", ResourceVersion:"1126", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox0-zcdb6
replicationcontroller/busybox1 created
error: error validating "hack/testdata/recursive/rc/rc/busybox-broken.yaml": error validating data: kind not set; if you choose to ignore these errors, turn validation off with --validate=false
I0214 17:31:51.478337   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581701502-30436", Name:"busybox1", UID:"de9079bf-5f75-4180-bd44-e6033fe6e269", APIVersion:"v1", ResourceVersion:"1128", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: busybox1-dr4jc
generic-resources.sh:430: Successful get rc {{range.items}}{{.metadata.name}}:{{end}}: busybox0:busybox1:
Successful
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
... skipping 2 lines ...
message:no rollbacker has been implemented for "ReplicationController"
no rollbacker has been implemented for "ReplicationController"
unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox0" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" pausing is not supported
error: replicationcontrollers "busybox1" pausing is not supported
has:replicationcontrollers "busybox1" pausing is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:Object 'Kind' is missing
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox0" resuming is not supported
Successful
message:unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
error: replicationcontrollers "busybox0" resuming is not supported
error: replicationcontrollers "busybox1" resuming is not supported
has:replicationcontrollers "busybox1" resuming is not supported
E0214 17:31:51.942284   55211 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
replicationcontroller "busybox0" force deleted
replicationcontroller "busybox1" force deleted
error: unable to decode "hack/testdata/recursive/rc/rc/busybox-broken.yaml": Object 'Kind' is missing in '{"apiVersion":"v1","ind":"ReplicationController","metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"replicas":1,"selector":{"app":"busybox2"},"template":{"metadata":{"labels":{"app":"busybox2"},"name":"busybox2"},"spec":{"containers":[{"command":["sleep","3600"],"image":"busybox","imagePullPolicy":"IfNotPresent","name":"busybox"}],"restartPolicy":"Always"}}}}'
Recording: run_namespace_tests
Running command: run_namespace_tests

+++ Running case: test-cmd.run_namespace_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_namespace_tests
+++ [0214 17:31:53] Testing kubectl(v1:namespaces)
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created (dry run)
namespace/my-namespace created (server dry run)
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
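Both dry-run flavours report the namespace as created without persisting anything, so the follow-up lookup still fails; as a sketch:
    kubectl create namespace my-namespace --dry-run=client   # created (dry run)
    kubectl create namespace my-namespace --dry-run=server   # created (server dry run)
    kubectl get namespaces my-namespace                      # NotFound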
namespace/my-namespace created
core.sh:1384: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
(Bnamespace "my-namespace" deleted
E0214 17:31:58.268891   55211 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0214 17:31:58.547609   55211 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace/my-namespace condition met
Successful
message:Error from server (NotFound): namespaces "my-namespace" not found
has: not found
namespace/my-namespace created
core.sh:1393: Successful get namespaces/my-namespace {{.metadata.name}}: my-namespace
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
... skipping 28 lines ...
namespace "namespace-1581701468-21566" deleted
namespace "namespace-1581701468-9090" deleted
namespace "namespace-1581701469-30865" deleted
namespace "namespace-1581701471-32473" deleted
namespace "namespace-1581701473-24443" deleted
namespace "namespace-1581701502-30436" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:warning: deleting cluster-scoped resources
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
namespace "kube-node-lease" deleted
namespace "my-namespace" deleted
namespace "namespace-1581701366-4752" deleted
... skipping 26 lines ...
namespace "namespace-1581701468-21566" deleted
namespace "namespace-1581701468-9090" deleted
namespace "namespace-1581701469-30865" deleted
namespace "namespace-1581701471-32473" deleted
namespace "namespace-1581701473-24443" deleted
namespace "namespace-1581701502-30436" deleted
Error from server (Forbidden): namespaces "default" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-public" is forbidden: this namespace may not be deleted
Error from server (Forbidden): namespaces "kube-system" is forbidden: this namespace may not be deleted
has:namespace "my-namespace" deleted
namespace/quotas created
I0214 17:31:59.465106   55211 shared_informer.go:206] Waiting for caches to sync for resource quota
I0214 17:31:59.465160   55211 shared_informer.go:213] Caches are synced for resource quota 
core.sh:1400: Successful get namespaces/quotas {{.metadata.name}}: quotas
core.sh:1401: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name \"test-quota\" }}found{{end}}{{end}}:: :
resourcequota/test-quota created (dry run)
resourcequota/test-quota created (server dry run)
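The quota test repeats the same client/server dry-run pattern. A sketch of the likely commands (any --hard values are an assumption; they are not visible here):

  kubectl create quota test-quota --namespace=quotas --dry-run=client
  kubectl create quota test-quota --namespace=quotas --dry-run=server

The core.sh:1405 check below confirms nothing was persisted before the real create runs.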
I0214 17:31:59.973964   55211 shared_informer.go:206] Waiting for caches to sync for garbage collector
I0214 17:31:59.974020   55211 shared_informer.go:213] Caches are synced for garbage collector 
core.sh:1405: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name \"test-quota\" }}found{{end}}{{end}}:: :
resourcequota/test-quota created
E0214 17:32:00.112978   55211 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1408: Successful get quota --namespace=quotas {{range.items}}{{ if eq .metadata.name \"test-quota\" }}found{{end}}{{end}}:: found:
resourcequota "test-quota" deleted
I0214 17:32:00.234264   55211 resource_quota_controller.go:306] Resource quota has been deleted quotas/test-quota
namespace "quotas" deleted
I0214 17:32:02.024632   55211 horizontal.go:354] Horizontal Pod Autoscaler busybox0 has been deleted in namespace-1581701502-30436
I0214 17:32:02.027452   55211 horizontal.go:354] Horizontal Pod Autoscaler busybox1 has been deleted in namespace-1581701502-30436
E0214 17:32:02.066224   55211 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
core.sh:1420: Successful get namespaces {{range.items}}{{ if eq .metadata.name \"other\" }}found{{end}}{{end}}:: :
namespace/other created
core.sh:1424: Successful get namespaces/other {{.metadata.name}}: other
core.sh:1428: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
pod/valid-pod created
core.sh:1432: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
core.sh:1434: Successful get pods -n other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
Successful
message:error: a resource cannot be retrieved by name across all namespaces
has:a resource cannot be retrieved by name across all namespaces
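This negative check hits a kubectl guard: an object name is only unique within a namespace, so fetching by name together with --all-namespaces is rejected. A likely repro (assumed invocation):

  kubectl get pods valid-pod --all-namespaces
  # error: a resource cannot be retrieved by name across all namespaces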
core.sh:1441: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
core.sh:1445: Successful get pods --namespace=other {{range.items}}{{.metadata.name}}:{{end}}: 
namespace "other" deleted
... skipping 37 lines ...
Recording: run_secrets_test
Running command: run_secrets_test

+++ Running case: test-cmd.run_secrets_test 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_secrets_test
E0214 17:32:11.818313   55211 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ [0214 17:32:11] Creating namespace namespace-1581701531-18286
namespace/namespace-1581701531-18286 created
Context "test" modified.
+++ [0214 17:32:11] Testing secrets
W0214 17:32:12.035149   72783 helpers.go:534] --dry-run is deprecated and can be replaced with --dry-run=client.
I0214 17:32:12.036240   72783 loader.go:375] Config loaded from file:  /tmp/tmp.q8V4h0HSIr/.kube/config
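helpers.go:534 flags the bare --dry-run flag here, and the warning itself names the replacement. In the current spelling, the secret-creation calls in this block would look something like (illustrative):

  kubectl create secret generic test-secret --from-literal=key1=value1 --dry-run=client -o yaml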
... skipping 61 lines ...
core.sh:845: Successful get secret/secret-string-data --namespace=test-secrets  {{.stringData}}: <no value>
secret "secret-string-data" deleted
core.sh:854: Successful get secrets --namespace=test-secrets {{range.items}}{{.metadata.name}}:{{end}}: 
secret "test-secret" deleted
namespace "test-secrets" deleted
I0214 17:32:16.710932   55211 namespace_controller.go:185] Namespace has been deleted other
E0214 17:32:16.896737   55211 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
E0214 17:32:18.560235   55211 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Recording: run_configmap_tests
Running command: run_configmap_tests

+++ Running case: test-cmd.run_configmap_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
... skipping 18 lines ...
core.sh:51: Successful get configmap/test-configmap --namespace=test-configmaps {{.metadata.name}}: test-configmap
core.sh:52: Successful get configmap/test-binary-configmap --namespace=test-configmaps {{.metadata.name}}: test-binary-configmap
configmap "test-configmap" deleted
configmap "test-binary-configmap" deleted
namespace "test-configmaps" deleted
I0214 17:32:25.100806   55211 namespace_controller.go:185] Namespace has been deleted test-secrets
E0214 17:32:25.736697   55211 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
+++ exit code: 0
Recording: run_client_config_tests
Running command: run_client_config_tests

+++ Running case: test-cmd.run_client_config_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_client_config_tests
+++ [0214 17:32:27] Creating namespace namespace-1581701547-3533
namespace/namespace-1581701547-3533 created
Context "test" modified.
+++ [0214 17:32:27] Testing client config
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:error: stat missing: no such file or directory
has:missing: no such file or directory
Successful
message:Error in configuration: context was not found for specified context: missing-context
has:context was not found for specified context: missing-context
Successful
message:error: no server found for cluster "missing-cluster"
has:no server found for cluster "missing-cluster"
Successful
message:error: auth info "missing-user" does not exist
has:auth info "missing-user" does not exist
Successful
message:error: error loading config file "/tmp/newconfig.yaml": no kind "Config" is registered for version "v-1" in scheme "k8s.io/client-go/tools/clientcmd/api/latest/latest.go:50"
has:error loading config file
Successful
message:error: stat missing-config: no such file or directory
has:no such file or directory
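Taken together, the client-config checks above map each error string onto a specific misconfiguration. Roughly (reconstructed; the exact invocations are not in this log):

  kubectl get pods --kubeconfig=missing        # stat missing: no such file or directory
  kubectl get pods --context=missing-context   # context was not found for specified context
  kubectl get pods --cluster=missing-cluster   # no server found for cluster "missing-cluster"
  kubectl get pods --user=missing-user         # auth info "missing-user" does not exist

The /tmp/newconfig.yaml case fails differently: the file loads, but its declared version "v-1" matches no registered Config kind, so client-go rejects it during scheme decoding.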
+++ exit code: 0
Recording: run_service_accounts_tests
Running command: run_service_accounts_tests

+++ Running case: test-cmd.run_service_accounts_tests 
... skipping 43 lines ...
Labels:                        <none>
Annotations:                   <none>
Schedule:                      59 23 31 2 *
Concurrency Policy:            Allow
Suspend:                       False
Successful Job History Limit:  3
Failed Job History Limit:      1
Starting Deadline Seconds:     <unset>
Selector:                      <unset>
Parallelism:                   <unset>
Completions:                   <unset>
Pod Template:
  Labels:  <none>
... skipping 38 lines ...
                job-name=test-job
Annotations:    cronjob.kubernetes.io/instantiate: manual
Controlled By:  CronJob/pi
Parallelism:    1
Completions:    1
Start Time:     Fri, 14 Feb 2020 17:32:36 +0000
Pods Statuses:  1 Running / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  controller-uid=51ba1119-ce1f-446f-92c9-b5da35984d5b
           job-name=test-job
  Containers:
   pi:
    Image:      k8s.gcr.io/perl
... skipping 365 lines ...
  selector:
    role: padawan
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
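With --local, kubectl's set-style commands run entirely client-side, so there is no live object to read; the input must come from -f/--filename, exactly as the error says. A sketch of a valid local invocation for this selector test (file name assumed):

  kubectl set selector -f redis-master-service.yaml role=padawan --local -o yaml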
core.sh:952: Successful get services redis-master {{range.spec.selector}}{{.}}:{{end}}: redis:master:backend:
I0214 17:32:47.264683   55211 namespace_controller.go:185] Namespace has been deleted test-jobs
service/redis-master selector updated
Successful
message:Error from server (Conflict): Operation cannot be fulfilled on services "redis-master": the object has been modified; please apply your changes to the latest version and try again
has:Conflict
core.sh:965: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
(Bservice "redis-master" deleted
core.sh:972: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
(Bcore.sh:976: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:
(Bservice/redis-master created
core.sh:980: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
(Bcore.sh:984: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:
(BE0214 17:32:48.589653   55211 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
service/service-v1-test created
core.sh:1005: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
service/service-v1-test replaced
core.sh:1012: Successful get services {{range.items}}{{.metadata.name}}:{{end}}: kubernetes:redis-master:service-v1-test:
service "redis-master" deleted
service "service-v1-test" deleted
... skipping 67 lines ...
daemonset.apps/bind restarted
apps.sh:48: Successful get daemonsets bind {{.metadata.generation}}: 5
daemonset.apps "bind" deleted
+++ exit code: 0
Recording: run_daemonset_history_tests
Running command: run_daemonset_history_tests
E0214 17:32:54.943636   55211 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource

+++ Running case: test-cmd.run_daemonset_history_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_daemonset_history_tests
+++ [0214 17:32:54] Creating namespace namespace-1581701574-8189
namespace/namespace-1581701574-8189 created
Context "test" modified.
+++ [0214 17:32:55] Testing kubectl(v1:daemonsets, v1:controllerrevisions)
apps.sh:66: Successful get daemonsets {{range.items}}{{.metadata.name}}:{{end}}: 
daemonset.apps/bind created
apps.sh:70: Successful get controllerrevisions {{range.items}}{{.metadata.annotations}}:{{end}}: map[deprecated.daemonset.template.generation:1 kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{"kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"},"labels":{"service":"bind"},"name":"bind","namespace":"namespace-1581701574-8189"},"spec":{"selector":{"matchLabels":{"service":"bind"}},"template":{"metadata":{"labels":{"service":"bind"}},"spec":{"affinity":{"podAntiAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":[{"labelSelector":{"matchExpressions":[{"key":"service","operator":"In","values":["bind"]}]},"namespaces":[],"topologyKey":"kubernetes.io/hostname"}]}},"containers":[{"image":"k8s.gcr.io/pause:2.0","name":"kubernetes-pause"}]}},"updateStrategy":{"rollingUpdate":{"maxUnavailable":"10%"},"type":"RollingUpdate"}}}
 kubernetes.io/change-cause:kubectl apply --filename=hack/testdata/rollingupdate-daemonset.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true]:
E0214 17:32:55.604014   55211 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
daemonset.apps/bind skipped rollback (current template already matches revision 1)
apps.sh:73: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:74: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
daemonset.apps/bind configured
apps.sh:77: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
apps.sh:78: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
... skipping 14 lines ...
 (dry run)
daemonset.apps/bind rolled back (server dry run)
apps.sh:84: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
apps.sh:85: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:86: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
daemonset.apps/bind rolled back
E0214 17:32:57.139449   55211 daemon_controller.go:292] namespace-1581701574-8189/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1581701574-8189", SelfLink:"/apis/apps/v1/namespaces/namespace-1581701574-8189/daemonsets/bind", UID:"bfd677ce-b47b-4dfb-b435-48088a8ae106", ResourceVersion:"1642", Generation:3, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717298375, loc:(*time.Location)(0x6c69560)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"3", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1581701574-8189\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001f00280), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:2.0", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, 
EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002401158), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00227cf00), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc001f002c0), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc001146020)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0024011ac)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:2, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
apps.sh:89: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:90: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
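Rolling back to revision 1000000 fails as expected because no such entry exists in the daemonset's controllerrevision history. The command likely looked like (assumed):

  kubectl rollout undo daemonset/bind --to-revision=1000000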
apps.sh:94: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:2.0:
E0214 17:32:57.593606   55211 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:95: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
daemonset.apps/bind rolled back
E0214 17:32:57.769928   55211 daemon_controller.go:292] namespace-1581701574-8189/bind failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"bind", GenerateName:"", Namespace:"namespace-1581701574-8189", SelfLink:"/apis/apps/v1/namespaces/namespace-1581701574-8189/daemonsets/bind", UID:"bfd677ce-b47b-4dfb-b435-48088a8ae106", ResourceVersion:"1647", Generation:4, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717298375, loc:(*time.Location)(0x6c69560)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"4", "kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{\"kubernetes.io/change-cause\":\"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true\"},\"labels\":{\"service\":\"bind\"},\"name\":\"bind\",\"namespace\":\"namespace-1581701574-8189\"},\"spec\":{\"selector\":{\"matchLabels\":{\"service\":\"bind\"}},\"template\":{\"metadata\":{\"labels\":{\"service\":\"bind\"}},\"spec\":{\"affinity\":{\"podAntiAffinity\":{\"requiredDuringSchedulingIgnoredDuringExecution\":[{\"labelSelector\":{\"matchExpressions\":[{\"key\":\"service\",\"operator\":\"In\",\"values\":[\"bind\"]}]},\"namespaces\":[],\"topologyKey\":\"kubernetes.io/hostname\"}]}},\"containers\":[{\"image\":\"k8s.gcr.io/pause:latest\",\"name\":\"kubernetes-pause\"},{\"image\":\"k8s.gcr.io/nginx:test-cmd\",\"name\":\"app\"}]}},\"updateStrategy\":{\"rollingUpdate\":{\"maxUnavailable\":\"10%\"},\"type\":\"RollingUpdate\"}}}\n", "kubernetes.io/change-cause":"kubectl apply --filename=hack/testdata/rollingupdate-daemonset-rv2.yaml --record=true --server=http://127.0.0.1:8080 --match-server-version=true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001018e40), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"service":"bind"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume(nil), InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kubernetes-pause", Image:"k8s.gcr.io/pause:latest", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"app", 
Image:"k8s.gcr.io/nginx:test-cmd", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount(nil), VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002ca65c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"", DeprecatedServiceAccount:"", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001546540), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(0xc001018fc0), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration(nil), HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00000e508)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc002ca661c)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:3, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "bind": the object has been modified; please apply your changes to the latest version and try again
apps.sh:98: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/pause:latest:
apps.sh:99: Successful get daemonset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:100: Successful get daemonset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
daemonset.apps "bind" deleted
+++ exit code: 0
Recording: run_rc_tests
... skipping 32 lines ...
Namespace:    namespace-1581701578-30182
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1581701578-30182
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 18 lines ...
Namespace:    namespace-1581701578-30182
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 12 lines ...
Namespace:    namespace-1581701578-30182
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 27 lines ...
Namespace:    namespace-1581701578-30182
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1581701578-30182
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 17 lines ...
Namespace:    namespace-1581701578-30182
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 11 lines ...
Namespace:    namespace-1581701578-30182
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v4
... skipping 15 lines ...
core.sh:1150: Successful get rc frontend {{.spec.replicas}}: 3
replicationcontroller/frontend scaled
E0214 17:33:00.511558   55211 replica_set.go:200] ReplicaSet has no controller: &ReplicaSet{ObjectMeta:{frontend  namespace-1581701578-30182 /api/v1/namespaces/namespace-1581701578-30182/replicationcontrollers/frontend c3bb760e-7f7c-4101-8748-2d182b685b5b 1683 2 2020-02-14 17:32:59 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{app: guestbook,tier: frontend,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[app:guestbook tier:frontend] map[] [] []  []} {[] [] [{php-redis gcr.io/google_samples/gb-frontend:v4 [] []  [{ 0 80 TCP }] [] [{GET_HOSTS_FROM dns nil}] {map[] map[cpu:{{100 -3} {<nil>} 100m DecimalSI} memory:{{104857600 0} {<nil>} 100Mi BinarySI}]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002fd83e8 <nil> ClusterFirst map[]   <nil>  false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:3,FullyLabeledReplicas:3,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
I0214 17:33:00.519991   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581701578-30182", Name:"frontend", UID:"c3bb760e-7f7c-4101-8748-2d182b685b5b", APIVersion:"v1", ResourceVersion:"1683", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: frontend-2lk69
core.sh:1154: Successful get rc frontend {{.spec.replicas}}: 2
core.sh:1158: Successful get rc frontend {{.spec.replicas}}: 2
error: Expected replicas to be 3, was 2
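The "Expected replicas to be 3" error is kubectl scale's precondition check: when --current-replicas is given, the resize is refused unless it matches the live count. A plausible repro (values assumed from context):

  kubectl scale rc frontend --current-replicas=3 --replicas=2
  # refused here, since the controller is already at 2 replicas

The core.sh:1162 assertion right after confirms the failed scale left the count untouched.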
core.sh:1162: Successful get rc frontend {{.spec.replicas}}: 2
core.sh:1166: Successful get rc frontend {{.spec.replicas}}: 2
replicationcontroller/frontend scaled
I0214 17:33:01.087559   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581701578-30182", Name:"frontend", UID:"c3bb760e-7f7c-4101-8748-2d182b685b5b", APIVersion:"v1", ResourceVersion:"1689", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-cmk9b
core.sh:1170: Successful get rc frontend {{.spec.replicas}}: 3
core.sh:1174: Successful get rc frontend {{.spec.replicas}}: 3
... skipping 31 lines ...
deployment.apps "nginx-deployment" deleted
Successful
message:service/expose-test-deployment exposed
has:service/expose-test-deployment exposed
service "expose-test-deployment" deleted
Successful
message:error: couldn't retrieve selectors via --selector flag or introspection: invalid deployment: no selectors, therefore cannot be exposed
See 'kubectl expose -h' for help and examples
has:invalid deployment: no selectors
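kubectl expose copies the selector from the object being exposed, so a deployment without selectors has nothing for the new Service to match on. For contrast, the working shape is (illustrative names and ports):

  kubectl expose deployment nginx-deployment --port=80 --target-port=8000 --name=expose-test-deployment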
deployment.apps/nginx-deployment created
I0214 17:33:03.262917   55211 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581701578-30182", Name:"nginx-deployment", UID:"d9519911-7e71-42b5-941b-5aec04e70809", APIVersion:"apps/v1", ResourceVersion:"1794", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6986c7bc94 to 3
I0214 17:33:03.273275   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701578-30182", Name:"nginx-deployment-6986c7bc94", UID:"b2a80f32-80ff-45c8-bea0-b1bee5637043", APIVersion:"apps/v1", ResourceVersion:"1795", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-klnjq
I0214 17:33:03.278291   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701578-30182", Name:"nginx-deployment-6986c7bc94", UID:"b2a80f32-80ff-45c8-bea0-b1bee5637043", APIVersion:"apps/v1", ResourceVersion:"1795", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6986c7bc94-7ttbg
... skipping 23 lines ...
service "frontend" deleted
service "frontend-2" deleted
service "frontend-3" deleted
service "frontend-4" deleted
service "frontend-5" deleted
Successful
message:error: cannot expose a Node
has:cannot expose
Successful
message:The Service "invalid-large-service-name-that-has-more-than-sixty-three-characters" is invalid: metadata.name: Invalid value: "invalid-large-service-name-that-has-more-than-sixty-three-characters": must be no more than 63 characters
has:metadata.name: Invalid value
Successful
message:service/kubernetes-serve-hostname-testing-sixty-three-characters-in-len exposed
... skipping 30 lines ...
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1317: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
horizontalpodautoscaler.autoscaling "frontend" deleted
horizontalpodautoscaler.autoscaling/frontend autoscaled
core.sh:1321: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
horizontalpodautoscaler.autoscaling "frontend" deleted
Error: required flag(s) "max" not set
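kubectl autoscale treats --max as mandatory, hence the flag error; the two successful HPA creations above would correspond to something like (assumed, matching the min/max/target values checked):

  kubectl autoscale rc frontend --min=1 --max=2 --cpu-percent=70
  kubectl autoscale rc frontend --min=2 --max=3 --cpu-percent=80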
replicationcontroller "frontend" deleted
core.sh:1330: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: 
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
... skipping 25 lines ...
            cpu: 300m
          requests:
            cpu: 300m
      terminationGracePeriodSeconds: 0
status: {}
W0214 17:33:09.945377   78954 helpers.go:534] --dry-run is deprecated and can be replaced with --dry-run=client.
Error from server (NotFound): deployments.apps "nginx-deployment-resources" not found
deployment.apps/nginx-deployment-resources created
I0214 17:33:10.209384   55211 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581701578-30182", Name:"nginx-deployment-resources", UID:"12995d31-4d52-4f5a-af69-705349964293", APIVersion:"apps/v1", ResourceVersion:"1964", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-67f8cfff5 to 3
I0214 17:33:10.220450   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701578-30182", Name:"nginx-deployment-resources-67f8cfff5", UID:"25c38f8f-229b-4194-a067-18b356762525", APIVersion:"apps/v1", ResourceVersion:"1965", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-ckh4z
I0214 17:33:10.222923   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701578-30182", Name:"nginx-deployment-resources-67f8cfff5", UID:"25c38f8f-229b-4194-a067-18b356762525", APIVersion:"apps/v1", ResourceVersion:"1965", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-hq5g5
I0214 17:33:10.226639   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701578-30182", Name:"nginx-deployment-resources-67f8cfff5", UID:"25c38f8f-229b-4194-a067-18b356762525", APIVersion:"apps/v1", ResourceVersion:"1965", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-67f8cfff5-jn47h
core.sh:1336: Successful get deployment {{range.items}}{{.metadata.name}}:{{end}}: nginx-deployment-resources:
core.sh:1337: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
core.sh:1338: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment-resources resource requirements updated
I0214 17:33:10.688136   55211 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581701578-30182", Name:"nginx-deployment-resources", UID:"12995d31-4d52-4f5a-af69-705349964293", APIVersion:"apps/v1", ResourceVersion:"1978", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-55c547f795 to 1
I0214 17:33:10.692972   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701578-30182", Name:"nginx-deployment-resources-55c547f795", UID:"45ea63e8-e08d-4010-84c1-177118f9f708", APIVersion:"apps/v1", ResourceVersion:"1979", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-55c547f795-7bqzt
core.sh:1341: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 100m:
core.sh:1342: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 100m:
error: unable to find container named redis
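kubectl set resources resolves -c/--containers against the pod template, and nothing in nginx-deployment-resources is named redis, hence the error. A sketch of both outcomes (the real container names are not visible in this log):

  kubectl set resources deployment nginx-deployment-resources -c=redis --limits=cpu=200m   # fails: no such container
  kubectl set resources deployment nginx-deployment-resources -c=nginx --limits=cpu=200m   # succeeds if a container "nginx" exists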
deployment.apps/nginx-deployment-resources resource requirements updated
I0214 17:33:11.176162   55211 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581701578-30182", Name:"nginx-deployment-resources", UID:"12995d31-4d52-4f5a-af69-705349964293", APIVersion:"apps/v1", ResourceVersion:"1988", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-resources-55c547f795 to 0
I0214 17:33:11.183681   55211 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581701578-30182", Name:"nginx-deployment-resources", UID:"12995d31-4d52-4f5a-af69-705349964293", APIVersion:"apps/v1", ResourceVersion:"1990", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-resources-6d86564b45 to 1
I0214 17:33:11.184241   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701578-30182", Name:"nginx-deployment-resources-55c547f795", UID:"45ea63e8-e08d-4010-84c1-177118f9f708", APIVersion:"apps/v1", ResourceVersion:"1992", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-resources-55c547f795-7bqzt
I0214 17:33:11.189349   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701578-30182", Name:"nginx-deployment-resources-6d86564b45", UID:"c5598143-ee4f-460a-a007-2ab8a499d239", APIVersion:"apps/v1", ResourceVersion:"1995", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-resources-6d86564b45-ckkq4
core.sh:1347: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
... skipping 81 lines ...
    status: "True"
    type: Progressing
  observedGeneration: 4
  replicas: 4
  unavailableReplicas: 4
  updatedReplicas: 1
error: you must specify resources by --filename when --local is set.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
core.sh:1357: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).resources.limits.cpu}}:{{end}}: 200m:
core.sh:1358: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.limits.cpu}}:{{end}}: 300m:
core.sh:1359: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).resources.requests.cpu}}:{{end}}: 300m:
... skipping 46 lines ...
                pod-template-hash=79b9bd9585
Annotations:    deployment.kubernetes.io/desired-replicas: 1
                deployment.kubernetes.io/max-replicas: 2
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/test-nginx-apps
Replicas:       1 current / 1 desired
Pods Status:    0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=test-nginx-apps
           pod-template-hash=79b9bd9585
  Containers:
   nginx:
    Image:        k8s.gcr.io/nginx:test-cmd
... skipping 102 lines ...
apps.sh:301: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
    Image:	k8s.gcr.io/nginx:test-cmd
deployment.apps/nginx rolled back (server dry run)
apps.sh:305: Successful get deployment.apps {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
deployment.apps/nginx rolled back
apps.sh:309: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
error: unable to find specified revision 1000000 in history
apps.sh:312: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
deployment.apps/nginx rolled back
apps.sh:316: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
deployment.apps/nginx paused
error: you cannot rollback a paused deployment; resume it first with 'kubectl rollout resume deployment/nginx' and try again
I0214 17:33:23.899534   55211 horizontal.go:354] Horizontal Pod Autoscaler frontend has been deleted in namespace-1581701578-30182
error: deployments.apps "nginx" can't restart paused deployment (run rollout resume first)
deployment.apps/nginx resumed
deployment.apps/nginx rolled back
    deployment.kubernetes.io/revision-history: 1,3
error: desired revision (3) is different from the running revision (5)
deployment.apps/nginx restarted
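The pause/resume block above traces the full state machine: while the deployment is paused, both rollout undo and rollout restart are refused; after resuming, the undo and restart go through. The revision-mismatch error in between is consistent with a rollout status check pinned to a stale --revision. A rough reconstruction (assumed):

  kubectl rollout pause deployment/nginx
  kubectl rollout undo deployment/nginx      # refused: resume it first
  kubectl rollout restart deployment/nginx   # refused: can't restart a paused deployment
  kubectl rollout resume deployment/nginx
  kubectl rollout undo deployment/nginx
  kubectl rollout restart deployment/nginx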
I0214 17:33:24.717671   55211 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581701592-14730", Name:"nginx", UID:"c4fa9992-7953-4e8a-9b70-a3adadf1643d", APIVersion:"apps/v1", ResourceVersion:"2220", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-f87d999f7 to 2
I0214 17:33:24.724568   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701592-14730", Name:"nginx-f87d999f7", UID:"d05663eb-9cb6-405e-851f-d8f5e06e453f", APIVersion:"apps/v1", ResourceVersion:"2224", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-f87d999f7-dk8fv
I0214 17:33:24.726727   55211 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581701592-14730", Name:"nginx", UID:"c4fa9992-7953-4e8a-9b70-a3adadf1643d", APIVersion:"apps/v1", ResourceVersion:"2222", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-6ccfb976f to 1
I0214 17:33:24.730128   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701592-14730", Name:"nginx-6ccfb976f", UID:"8e0cc8d1-e003-4351-94f2-26689b1d1111", APIVersion:"apps/v1", ResourceVersion:"2228", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-6ccfb976f-9kwv8
Successful
... skipping 78 lines ...
apps.sh:356: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment image updated
I0214 17:33:27.801659   55211 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581701592-14730", Name:"nginx-deployment", UID:"954a2b3e-f332-48c5-ad5d-873ff8c63044", APIVersion:"apps/v1", ResourceVersion:"2291", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-59df9b5f5b to 1
I0214 17:33:27.807389   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701592-14730", Name:"nginx-deployment-59df9b5f5b", UID:"abe32c4b-6ed8-4dc3-9843-391910e50b7e", APIVersion:"apps/v1", ResourceVersion:"2292", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-59df9b5f5b-4rck6
apps.sh:359: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
apps.sh:360: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
error: unable to find container named "redis"
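Same failure class as the set resources case earlier: kubectl set image matches on container name, and this deployment has no container "redis". Illustrative forms (container names assumed):

  kubectl set image deployment nginx-deployment redis=k8s.gcr.io/nginx:1.7.9    # fails: no container named "redis"
  kubectl set image deployment nginx-deployment nginx=k8s.gcr.io/nginx:test-cmd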
deployment.apps/nginx-deployment image updated
apps.sh:365: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:test-cmd:
apps.sh:366: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
deployment.apps/nginx-deployment image updated
apps.sh:369: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx:1.7.9:
apps.sh:370: Successful get deployment {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/perl:
... skipping 23 lines ...
deployment.apps/nginx-deployment env updated
I0214 17:33:31.410003   55211 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581701592-14730", Name:"nginx-deployment", UID:"4b5efc79-9743-48c2-8fa9-2655881eec54", APIVersion:"apps/v1", ResourceVersion:"2363", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-6b9f7756b4 to 1
I0214 17:33:31.415302   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701592-14730", Name:"nginx-deployment-6b9f7756b4", UID:"3ec03881-a3ec-4705-83bb-38f540eadeea", APIVersion:"apps/v1", ResourceVersion:"2364", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-6b9f7756b4-m4zns
apps.sh:400: Successful get deploy nginx-deployment {{ (index (index .spec.template.spec.containers 0).env 0).name}}: KEY_2
apps.sh:402: Successful get deploy nginx-deployment {{ len (index .spec.template.spec.containers 0).env }}: 1
deployment.apps/nginx-deployment env updated (dry run)
E0214 17:33:31.808768   55211 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
deployment.apps/nginx-deployment env updated (server dry run)
apps.sh:406: Successful get deploy nginx-deployment {{ len (index .spec.template.spec.containers 0).env }}: 1
deployment.apps/nginx-deployment env updated
I0214 17:33:32.248515   55211 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581701592-14730", Name:"nginx-deployment", UID:"4b5efc79-9743-48c2-8fa9-2655881eec54", APIVersion:"apps/v1", ResourceVersion:"2374", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set nginx-deployment-598d4d68b4 to 2
I0214 17:33:32.256012   55211 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581701592-14730", Name:"nginx-deployment", UID:"4b5efc79-9743-48c2-8fa9-2655881eec54", APIVersion:"apps/v1", ResourceVersion:"2376", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-754bf964c8 to 1
I0214 17:33:32.262237   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701592-14730", Name:"nginx-deployment-754bf964c8", UID:"cf123ed5-a51d-44e2-a2e2-31bdc376c63a", APIVersion:"apps/v1", ResourceVersion:"2380", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-754bf964c8-78tg4
... skipping 14 lines ...
I0214 17:33:32.771819   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701592-14730", Name:"nginx-deployment-6b9f7756b4", UID:"3ec03881-a3ec-4705-83bb-38f540eadeea", APIVersion:"apps/v1", ResourceVersion:"2439", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: nginx-deployment-6b9f7756b4-m4zns
deployment.apps/nginx-deployment env updated
I0214 17:33:32.891351   55211 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"namespace-1581701592-14730", Name:"nginx-deployment", UID:"4b5efc79-9743-48c2-8fa9-2655881eec54", APIVersion:"apps/v1", ResourceVersion:"2438", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set nginx-deployment-98b7fd455 to 1
I0214 17:33:32.945833   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701592-14730", Name:"nginx-deployment-98b7fd455", UID:"3c7f7317-e318-479d-b845-a3eebbb84269", APIVersion:"apps/v1", ResourceVersion:"2446", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: nginx-deployment-98b7fd455-s7szr
deployment.apps/nginx-deployment env updated
deployment.apps "nginx-deployment" deleted
E0214 17:33:33.145687   55211 replica_set.go:535] sync "namespace-1581701592-14730/nginx-deployment-6b9f7756b4" failed with replicasets.apps "nginx-deployment-6b9f7756b4" not found
configmap "test-set-env-config" deleted
secret "test-set-env-secret" deleted
+++ exit code: 0
E0214 17:33:33.344434   55211 replica_set.go:535] sync "namespace-1581701592-14730/nginx-deployment-d74969475" failed with replicasets.apps "nginx-deployment-d74969475" not found
Recording: run_rs_tests
Running command: run_rs_tests
E0214 17:33:33.394757   55211 replica_set.go:535] sync "namespace-1581701592-14730/nginx-deployment-98b7fd455" failed with replicasets.apps "nginx-deployment-98b7fd455" not found

+++ Running case: test-cmd.run_rs_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_rs_tests
+++ [0214 17:33:33] Creating namespace namespace-1581701613-18287
E0214 17:33:33.444629   55211 replica_set.go:535] sync "namespace-1581701592-14730/nginx-deployment-868b664cb5" failed with replicasets.apps "nginx-deployment-868b664cb5" not found
namespace/namespace-1581701613-18287 created
Context "test" modified.
+++ [0214 17:33:33] Testing kubectl(v1:replicasets)
apps.sh:533: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
I0214 17:33:33.794156   55211 horizontal.go:354] Horizontal Pod Autoscaler nginx-deployment has been deleted in namespace-1581701592-14730
replicaset.apps/frontend created
I0214 17:33:33.928832   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701613-18287", Name:"frontend", UID:"1cf6d67a-200a-47f0-8461-6ae999605c26", APIVersion:"apps/v1", ResourceVersion:"2478", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-xxmvw
I0214 17:33:33.934009   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701613-18287", Name:"frontend", UID:"1cf6d67a-200a-47f0-8461-6ae999605c26", APIVersion:"apps/v1", ResourceVersion:"2478", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-645zj
I0214 17:33:33.934152   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701613-18287", Name:"frontend", UID:"1cf6d67a-200a-47f0-8461-6ae999605c26", APIVersion:"apps/v1", ResourceVersion:"2478", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-p65vv
+++ [0214 17:33:33] Deleting rs
E0214 17:33:33.994885   55211 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
replicaset.apps "frontend" deleted
apps.sh:539: Successful get pods -l "tier=frontend" {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:543: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
replicaset.apps/frontend created
I0214 17:33:34.485624   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701613-18287", Name:"frontend", UID:"47f53e93-ff74-4ae3-a570-96e8803561b4", APIVersion:"apps/v1", ResourceVersion:"2494", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-x7sjx
I0214 17:33:34.488381   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701613-18287", Name:"frontend", UID:"47f53e93-ff74-4ae3-a570-96e8803561b4", APIVersion:"apps/v1", ResourceVersion:"2494", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-hqbxx
I0214 17:33:34.489700   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701613-18287", Name:"frontend", UID:"47f53e93-ff74-4ae3-a570-96e8803561b4", APIVersion:"apps/v1", ResourceVersion:"2494", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-cqjj4
apps.sh:547: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
+++ [0214 17:33:34] Deleting rs
replicaset.apps "frontend" deleted
E0214 17:33:34.794745   55211 replica_set.go:535] sync "namespace-1581701613-18287/frontend" failed with replicasets.apps "frontend" not found
apps.sh:551: Successful get rs {{range.items}}{{.metadata.name}}:{{end}}: 
apps.sh:553: Successful get pods -l "tier=frontend" {{range.items}}{{(index .spec.containers 0).name}}:{{end}}: php-redis:php-redis:php-redis:
pod "frontend-cqjj4" deleted
pod "frontend-hqbxx" deleted
pod "frontend-x7sjx" deleted
apps.sh:556: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
... skipping 15 lines ...
Namespace:    namespace-1581701613-18287
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1581701613-18287
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 18 lines ...
Namespace:    namespace-1581701613-18287
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 12 lines ...
Namespace:    namespace-1581701613-18287
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 25 lines ...
Namespace:    namespace-1581701613-18287
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1581701613-18287
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 17 lines ...
Namespace:    namespace-1581701613-18287
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 11 lines ...
Namespace:    namespace-1581701613-18287
Selector:     app=guestbook,tier=frontend
Labels:       app=guestbook
              tier=frontend
Annotations:  <none>
Replicas:     3 current / 3 desired
Pods Status:  0 Running / 3 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=guestbook
           tier=frontend
  Containers:
   php-redis:
    Image:      gcr.io/google_samples/gb-frontend:v3
... skipping 198 lines ...
horizontalpodautoscaler.autoscaling/frontend autoscaled
apps.sh:680: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 1 2 70
horizontalpodautoscaler.autoscaling "frontend" deleted
horizontalpodautoscaler.autoscaling/frontend autoscaled
apps.sh:684: Successful get hpa frontend {{.spec.minReplicas}} {{.spec.maxReplicas}} {{.spec.targetCPUUtilizationPercentage}}: 2 3 80
horizontalpodautoscaler.autoscaling "frontend" deleted
Error: required flag(s) "max" not set
replicaset.apps "frontend" deleted
+++ exit code: 0
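The `Error: required flag(s) "max" not set` above is kubectl autoscale refusing to create an HPA without an upper bound; --max is the only mandatory flag. A minimal sketch of what this case appears to exercise, reusing the names and values from the assertions above:

    kubectl autoscale rs frontend --max=2 --cpu-percent=70           # --min defaults to 1, matching "1 2 70"
    kubectl autoscale rs frontend --min=2 --max=3 --cpu-percent=80   # matching "2 3 80"
    kubectl autoscale rs frontend --min=2 --cpu-percent=80           # fails: required flag(s) "max" not set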
Recording: run_stateful_set_tests
Running command: run_stateful_set_tests

+++ Running case: test-cmd.run_stateful_set_tests 
... skipping 4 lines ...
Context "test" modified.
+++ [0214 17:33:45] Testing kubectl(v1:statefulsets)
apps.sh:492: Successful get statefulset {{range.items}}{{.metadata.name}}:{{end}}: 
I0214 17:33:46.388018   51768 controller.go:606] quota admission added evaluator for: statefulsets.apps
statefulset.apps/nginx created
apps.sh:498: Successful get statefulset nginx {{.spec.replicas}}: 0
E0214 17:33:46.578498   55211 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
apps.sh:499: Successful get statefulset nginx {{.status.observedGeneration}}: 1
statefulset.apps/nginx scaled
I0214 17:33:46.865578   55211 event.go:278] Event(v1.ObjectReference{Kind:"StatefulSet", Namespace:"namespace-1581701625-29267", Name:"nginx", UID:"1ff7cbf7-0456-4d11-a2d5-03b67329eeb2", APIVersion:"apps/v1", ResourceVersion:"2751", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' create Pod nginx-0 in StatefulSet nginx successful
apps.sh:503: Successful get statefulset nginx {{.spec.replicas}}: 1
apps.sh:504: Successful get statefulset nginx {{.status.observedGeneration}}: 2
E0214 17:33:47.280722   55211 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
statefulset.apps/nginx restarted
apps.sh:512: Successful get statefulset nginx {{.status.observedGeneration}}: 3
statefulset.apps "nginx" deleted
I0214 17:33:47.690363   55211 stateful_set.go:419] StatefulSet has been deleted namespace-1581701625-29267/nginx
+++ exit code: 0
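Each write above advances the StatefulSet's generation, which the assertions read back through .status.observedGeneration (1 after create, 2 after the scale, 3 after the restart). A hedged replay of those steps with plain kubectl:

    kubectl scale statefulset nginx --replicas=1     # observedGeneration: 2 once synced
    kubectl rollout restart statefulset nginx        # observedGeneration: 3 once synced
    kubectl get statefulset nginx -o go-template='{{.status.observedGeneration}}'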
Recording: run_statefulset_history_tests
... skipping 40 lines ...
apps.sh:458: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
apps.sh:459: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 2
(Bstatefulset.apps/nginx rolled back
apps.sh:462: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
apps.sh:463: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
Successful
message:error: unable to find specified revision 1000000 in history
has:unable to find specified revision
apps.sh:467: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.7:
apps.sh:468: Successful get statefulset {{range.items}}{{(len .spec.template.spec.containers)}}{{end}}: 1
statefulset.apps/nginx rolled back
apps.sh:471: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 0).image}}:{{end}}: k8s.gcr.io/nginx-slim:0.8:
apps.sh:472: Successful get statefulset {{range.items}}{{(index .spec.template.spec.containers 1).image}}:{{end}}: k8s.gcr.io/pause:2.0:
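The rollback block above is driven by `kubectl rollout undo`: a bare undo steps back one revision, --to-revision targets a specific entry, and a revision that was never recorded (1000000) yields the "unable to find specified revision" error. A sketch, assuming the statefulset name from the log:

    kubectl rollout history statefulset nginx               # lists the revisions --to-revision may target
    kubectl rollout undo statefulset nginx                  # back one revision
    kubectl rollout undo statefulset nginx --to-revision=2  # explicit target; 1000000 is rejected as above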
... skipping 58 lines ...
Name:         mock
Namespace:    namespace-1581701633-7170
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 56 lines ...
Name:         mock
Namespace:    namespace-1581701633-7170
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 57 lines ...
Name:         mock
Namespace:    namespace-1581701633-7170
Selector:     app=mock
Labels:       app=mock
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 41 lines ...
Namespace:    namespace-1581701633-7170
Selector:     app=mock
Labels:       app=mock
              status=replaced
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 11 lines ...
Namespace:    namespace-1581701633-7170
Selector:     app=mock2
Labels:       app=mock2
              status=replaced
Annotations:  <none>
Replicas:     1 current / 1 desired
Pods Status:  0 Running / 1 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=mock2
  Containers:
   mock-container:
    Image:        k8s.gcr.io/pause:2.0
    Port:         9949/TCP
... skipping 103 lines ...
+++ [0214 17:34:08] Creating namespace namespace-1581701648-3376
namespace/namespace-1581701648-3376 created
Context "test" modified.
+++ [0214 17:34:08] Testing persistent volumes
storage.sh:30: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
persistentvolume/pv0001 created
E0214 17:34:09.286991   55211 pv_protection_controller.go:118] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
storage.sh:33: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
(Bpersistentvolume "pv0001" deleted
persistentvolume/pv0002 created
storage.sh:36: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0002:
(Bpersistentvolume "pv0002" deleted
persistentvolume/pv0003 created
storage.sh:39: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0003:
(Bpersistentvolume "pv0003" deleted
storage.sh:42: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: 
persistentvolume/pv0001 created
E0214 17:34:10.513014   55211 pv_protection_controller.go:118] PV pv0001 failed with : Operation cannot be fulfilled on persistentvolumes "pv0001": the object has been modified; please apply your changes to the latest version and try again
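The two pv_protection_controller errors above look like benign optimistic-concurrency losses: an update was sent with a stale resourceVersion and got the usual 409. A client hitting the same conflict recovers by refetching before writing; a minimal sketch:

    kubectl get pv pv0001 -o yaml > /tmp/pv0001.yaml   # refetch to pick up the current resourceVersion
    kubectl replace -f /tmp/pv0001.yaml                # replace sends resourceVersion, so a stale copy fails exactly as above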
storage.sh:45: Successful get pv {{range.items}}{{.metadata.name}}:{{end}}: pv0001:
Successful
message:warning: deleting cluster-scoped resources, not scoped to the provided namespace
persistentvolume "pv0001" deleted
has:warning: deleting cluster-scoped resources
Successful
... skipping 505 lines ...
  "status": {
    "allowed": true,
    "reason": "RBAC: allowed by ClusterRoleBinding \"super-group\" of ClusterRole \"admin\" to Group \"the-group\""
  }
}
+++ exit code: 0
E0214 17:34:15.465638   55211 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:yes
has:yes
Successful
message:yes
has:yes
... skipping 2 lines ...
yes
has:the server doesn't have a resource type
Successful
message:yes
has:yes
Successful
message:error: --subresource can not be used with NonResourceURL
has:subresource can not be used with NonResourceURL
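`kubectl auth can-i` has two mutually exclusive forms, which is what the error above enforces: a resource type (optionally with --subresource) or a bare non-resource URL. A sketch of both, assuming a reachable cluster:

    kubectl auth can-i get /healthz                            # nonResourceURL form; --subresource is rejected here
    kubectl auth can-i update deployments --subresource=scale  # subresources combine only with resource types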
Successful
Successful
message:yes
0
has:0
... skipping 39 lines ...
		{Verbs:[get list watch] APIGroups:[] Resources:[configmaps] ResourceNames:[] NonResourceURLs:[]}
legacy-script.sh:812: Successful get rolebindings -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-RB:
legacy-script.sh:813: Successful get roles -n some-other-random -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-R:
legacy-script.sh:814: Successful get clusterrolebindings -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CRB:
legacy-script.sh:815: Successful get clusterroles -l test-cmd=auth {{range.items}}{{.metadata.name}}:{{end}}: testing-CR:
Successful
message:error: only rbac.authorization.k8s.io/v1 is supported: not *v1beta1.ClusterRole
has:only rbac.authorization.k8s.io/v1 is supported
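The rule dump and the v1-only error above appear to come from `kubectl auth reconcile`, which refuses any manifest that is not rbac.authorization.k8s.io/v1. The objects the script asserts on can be produced directly; names are taken from the assertions above, the exact flags are an assumption:

    kubectl create role testing-R -n some-other-random --verb=get,list,watch --resource=configmaps
    kubectl create clusterrolebinding super-group --clusterrole=admin --group=the-group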
rolebinding.rbac.authorization.k8s.io "testing-RB" deleted
role.rbac.authorization.k8s.io "testing-R" deleted
warning: deleting cluster-scoped resources, not scoped to the provided namespace
clusterrole.rbac.authorization.k8s.io "testing-CR" deleted
clusterrolebinding.rbac.authorization.k8s.io "testing-CRB" deleted
... skipping 20 lines ...
replicationcontroller/cassandra created
I0214 17:34:18.420020   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581701658-23136", Name:"cassandra", UID:"01727011-b3fe-4e76-9c9a-47a98f784b35", APIVersion:"v1", ResourceVersion:"3132", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-4hlz4
I0214 17:34:18.432465   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581701658-23136", Name:"cassandra", UID:"01727011-b3fe-4e76-9c9a-47a98f784b35", APIVersion:"v1", ResourceVersion:"3132", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-xvzv5
service/cassandra created
Waiting for Get all -l'app=cassandra' {{range.items}}{{range .metadata.labels}}{{.}}:{{end}}{{end}} : expected: cassandra:cassandra:cassandra:cassandra::, got: cassandra:cassandra:cassandra:cassandra:

discovery.sh:91: FAIL!
Get all -l'app=cassandra' {{range.items}}{{range .metadata.labels}}{{.}}:{{end}}{{end}}
  Expected: cassandra:cassandra:cassandra:cassandra::
  Got:      cassandra:cassandra:cassandra:cassandra:
55 /home/prow/go/src/k8s.io/kubernetes/hack/lib/test.sh
discovery.sh:92: Successful get all -l'app=cassandra' {{range.items}}{{range .metadata.labels}}{{.}}:{{end}}{{end}}: cassandra:cassandra:cassandra:cassandra:
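The FAIL/Successful pair above is the harness retrying a single query until the expected label materializes: it renders every label of every object matching -l app=cassandra, and the first pass saw one fewer `cassandra:` than expected. The query it loops on, reconstructed from the log:

    kubectl get all -l 'app=cassandra' \
      -o go-template='{{range .items}}{{range .metadata.labels}}{{.}}:{{end}}{{end}}'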
pod "cassandra-4hlz4" deleted
I0214 17:34:19.028392   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581701658-23136", Name:"cassandra", UID:"01727011-b3fe-4e76-9c9a-47a98f784b35", APIVersion:"v1", ResourceVersion:"3138", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-gj9hw
pod "cassandra-xvzv5" deleted
I0214 17:34:19.038417   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicationController", Namespace:"namespace-1581701658-23136", Name:"cassandra", UID:"01727011-b3fe-4e76-9c9a-47a98f784b35", APIVersion:"v1", ResourceVersion:"3138", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: cassandra-v7z7n
replicationcontroller "cassandra" deleted
E0214 17:34:19.054377   55211 replica_set.go:535] sync "namespace-1581701658-23136/cassandra" failed with Operation cannot be fulfilled on replicationcontrollers "cassandra": StorageError: invalid object, Code: 4, Key: /registry/controllers/namespace-1581701658-23136/cassandra, ResourceVersion: 0, AdditionalErrorMsg: Precondition failed: UID in precondition: 01727011-b3fe-4e76-9c9a-47a98f784b35, UID in object meta: 
service "cassandra" deleted
+++ exit code: 0
Recording: run_kubectl_explain_tests
Running command: run_kubectl_explain_tests

+++ Running case: test-cmd.run_kubectl_explain_tests 
... skipping 351 lines ...
namespace-1581701648-3376    default   0         16s
namespace-1581701650-10578   default   0         14s
namespace-1581701658-23136   default   0         6s
some-other-random            default   0         7s
has:all-ns-test-2
namespace "all-ns-test-1" deleted
E0214 17:34:28.350479   55211 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
namespace "all-ns-test-2" deleted
I0214 17:34:34.609058   55211 namespace_controller.go:185] Namespace has been deleted all-ns-test-1
get.sh:376: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: valid-pod:
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "valid-pod" force deleted
get.sh:380: Successful get pods {{range.items}}{{.metadata.name}}:{{end}}: 
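The warning above is what --force prints: deletion returns immediately, without waiting for the kubelet to confirm termination. The invocation behind it, assuming the pod name from the assertion:

    kubectl delete pod valid-pod --force --grace-period=0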
... skipping 87 lines ...
has:valid-pod:
W0214 17:34:36.502723   89275 helpers.go:534] --dry-run is deprecated and can be replaced with --dry-run=client.
Successful
message:valid-pod:
has:valid-pod:
W0214 17:34:36.690772   89292 helpers.go:534] --dry-run is deprecated and can be replaced with --dry-run=client.
E0214 17:34:36.784584   55211 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:valid-pod:
has:valid-pod:
W0214 17:34:36.886019   89309 helpers.go:534] --dry-run is deprecated and can be replaced with --dry-run=client.
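The repeated helpers.go:534 warnings mark the dry-run flag split: a bare --dry-run is deprecated in favor of an explicit mode. A sketch of the two modes, with a hypothetical deployment name:

    kubectl create deployment demo --image=k8s.gcr.io/pause:2.0 --dry-run=client -o yaml  # rendered client-side, no warning
    kubectl create deployment demo --image=k8s.gcr.io/pause:2.0 --dry-run=server -o yaml  # submitted and admitted, but not persisted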
Successful
message:scale-1:
... skipping 159 lines ...
Successful
message:deploy:
has:deploy:
Successful
message:deploy:
has:deploy:
E0214 17:34:42.049862   55211 reflector.go:178] k8s.io/client-go/metadata/metadatainformer/informer.go:89: Failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
Successful
message:deploy:
has:deploy:
Successful
message:deploy:
has:deploy:
... skipping 314 lines ...
message:node/127.0.0.1 already uncordoned (server dry run)
has:already uncordoned
node-management.sh:134: Successful get nodes 127.0.0.1 {{.spec.unschedulable}}: <no value>
node/127.0.0.1 labeled
node-management.sh:139: Successful get nodes 127.0.0.1 {{.metadata.labels.test}}: label
Successful
message:error: cannot specify both a node name and a --selector option
See 'kubectl drain -h' for help and examples
has:cannot specify both a node name
Successful
message:error: USAGE: cordon NODE [flags]
See 'kubectl cordon -h' for help and examples
has:error\: USAGE\: cordon NODE
node/127.0.0.1 already uncordoned
Successful
message:error: You must provide one or more resources by argument or filename.
Example resource specifications include:
   '-f rsrc.yaml'
   '--filename=rsrc.json'
   '<resource> <name>'
   '<resource>'
has:must provide one or more resources
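The drain/cordon errors above pin down the CLI contract: cordon and uncordon take exactly one node argument, and drain accepts either a node name or a --selector, never both, and needs at least one. A sketch against the test node:

    kubectl cordon 127.0.0.1                       # mark unschedulable
    kubectl drain 127.0.0.1 --ignore-daemonsets    # node name or -l/--selector, not both
    kubectl uncordon 127.0.0.1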
... skipping 14 lines ...
+++ [0214 17:34:51] Testing kubectl plugins
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/version/kubectl-version
  - warning: kubectl-version overwrites existing command: "kubectl version"
error: one plugin warning was found
has:kubectl-version overwrites existing command: "kubectl version"
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/kubectl-foo
test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo
  - warning: test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin: test/fixtures/pkg/kubectl/plugins/kubectl-foo
error: one plugin warning was found
has:test/fixtures/pkg/kubectl/plugins/foo/kubectl-foo is overshadowed by a similarly named plugin
Successful
message:The following compatible plugins are available:

test/fixtures/pkg/kubectl/plugins/kubectl-foo
has:plugins are available
Successful
message:Unable read directory "test/fixtures/pkg/kubectl/plugins/empty" from your PATH: open test/fixtures/pkg/kubectl/plugins/empty: no such file or directory. Skipping...
error: unable to find any kubectl plugins in your PATH
has:unable to find any kubectl plugins in your PATH
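All the plugin cases above exercise one mechanism: any executable on PATH named kubectl-<name> is surfaced as `kubectl <name>`, earlier PATH entries shadow later ones, and a plugin that collides with a builtin (kubectl-version) only warns. A self-contained sketch with a hypothetical plugin name:

    cat > kubectl-hello <<'EOF'
    #!/bin/sh
    echo "I am plugin hello"
    EOF
    chmod +x kubectl-hello
    PATH="$PWD:$PATH" kubectl hello    # prints: I am plugin hello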
Successful
message:I am plugin foo
has:plugin foo
Successful
message:I am plugin bar called with args test/fixtures/pkg/kubectl/plugins/bar/kubectl-bar arg1
... skipping 10 lines ...

+++ Running case: test-cmd.run_impersonation_tests 
+++ working dir: /home/prow/go/src/k8s.io/kubernetes
+++ command: run_impersonation_tests
+++ [0214 17:34:52] Testing impersonation
Successful
message:error: requesting groups or user-extra for  without impersonating a user
has:without impersonating a user
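The impersonation error above is the ordering rule: --as-group (or user-extra) is only meaningful once --as names a user. A sketch with a hypothetical manifest; the CSR assertions just below check that the object records the impersonated identity:

    kubectl create -f csr.yaml --as=user1 --as-group=group2   # csr.yaml is illustrative, not from the log
    kubectl get csr foo -o go-template='{{.spec.username}}'   # user1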
certificatesigningrequest.certificates.k8s.io/foo created
authorization.sh:68: Successful get csr/foo {{.spec.username}}: user1
authorization.sh:69: Successful get csr/foo {{range .spec.groups}}{{.}}{{end}}: system:authenticated
certificatesigningrequest.certificates.k8s.io "foo" deleted
certificatesigningrequest.certificates.k8s.io/foo created
... skipping 33 lines ...
No resources found
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
I0214 17:34:56.211406   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701693-15839", Name:"test-1-6d98955cc9", UID:"d339a57e-c5e7-496c-bccc-60f265842a23", APIVersion:"apps/v1", ResourceVersion:"3338", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-1-6d98955cc9-hsn26
pod "test-1-6d98955cc9-dk75b" force deleted
pod "test-2-65897ff84d-p8b46" force deleted
I0214 17:34:56.224539   55211 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"namespace-1581701693-15839", Name:"test-2-65897ff84d", UID:"7577fe06-1517-442a-8d00-41bacdc745b7", APIVersion:"apps/v1", ResourceVersion:"3348", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: test-2-65897ff84d-g4dx8
E0214 17:34:56.224731   55211 replica_set.go:535] sync "namespace-1581701693-15839/test-1-6d98955cc9" failed with replicasets.apps "test-1-6d98955cc9" not found
E0214 17:34:56.228265   55211 replica_set.go:535] sync "namespace-1581701693-15839/test-2-65897ff84d" failed with replicasets.apps "test-2-65897ff84d" not found
+++ [0214 17:34:56] TESTS PASSED
I0214 17:34:56.261819   51768 controller.go:87] Shutting down OpenAPI AggregationController
I0214 17:34:56.261857   51768 controller.go:181] Shutting down kubernetes service endpoint reconciler
I0214 17:34:56.261929   51768 secure_serving.go:222] Stopped listening on 127.0.0.1:8080
I0214 17:34:56.262006   51768 controller.go:123] Shutting down OpenAPI controller
I0214 17:34:56.262013   51768 dynamic_serving_content.go:144] Shutting down serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key
... skipping 15 lines ...
I0214 17:34:56.262689   51768 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
I0214 17:34:56.262772   51768 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
I0214 17:34:56.262859   51768 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
I0214 17:34:56.263029   51768 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
I0214 17:34:56.263046   51768 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
I0214 17:34:56.263188   51768 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
W0214 17:34:56.263237   51768 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0214 17:34:56.263298   51768 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
W0214 17:34:56.263301   51768 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
W0214 17:34:56.263344   51768 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0214 17:34:56.263376   51768 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
W0214 17:34:56.263403   51768 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0214 17:34:56.263442   51768 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
I0214 17:34:56.263469   51768 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
W0214 17:34:56.263486   51768 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
I0214 17:34:56.263546   51768 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
I0214 17:34:56.263556   51768 clientconn.go:882] blockingPicker: the picked transport is not ready, loop back to repick
W0214 17:34:56.263612   51768 clientconn.go:1208] grpc: addrConn.createTransport failed to connect to {http://127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
E0214 17:34:56.263624   51768 controller.go:184] rpc error: code = Unavailable desc = transport is closing
junit report dir: /logs/artifacts
+++ [0214 17:34:56] Clean up complete
+ make test-integration
+++ [0214 17:35:00] Checking etcd is on PATH
/home/prow/go/src/k8s.io/kubernetes/third_party/etcd/etcd
+++ [0214 17:35:00] Starting etcd instance
... skipping 313 lines ...
    synthetic_master_test.go:722: UPDATE_NODE_APISERVER is not set

=== SKIP: test/integration/scheduler_perf TestSchedule100Node3KPods (0.00s)
    scheduler_test.go:73: Skipping because we want to run short tests


=== Failed
=== FAIL: test/integration/apiserver/apply TestApplyCRDStructuralSchema (9.24s)
I0214 17:36:28.929075  108477 controller.go:181] Shutting down kubernetes service endpoint reconciler
I0214 17:36:28.929278  108477 dynamic_cafile_content.go:181] Shutting down request-header::/tmp/kubernetes-kube-apiserver547317363/proxy-ca.crt
I0214 17:36:28.929303  108477 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
I0214 17:36:28.929324  108477 apiservice_controller.go:106] Shutting down APIServiceRegistrationController
I0214 17:36:28.929358  108477 controller.go:123] Shutting down OpenAPI controller
I0214 17:36:28.929390  108477 customresource_discovery_controller.go:220] Shutting down DiscoveryController
... skipping 231 lines ...
E0214 17:36:37.039049  108477 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /ce2a047a-59cc-41c6-90f9-6bff3664200d/registry/masterleases/127.0.0.1, ResourceVersion: 0, AdditionalErrorMsg: 
I0214 17:36:37.092928  108477 cache.go:39] Caches are synced for autoregister controller
I0214 17:36:37.092934  108477 cache.go:39] Caches are synced for AvailableConditionController controller
I0214 17:36:37.093561  108477 shared_informer.go:213] Caches are synced for crd-autoregister 
I0214 17:36:37.093625  108477 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0214 17:36:37.094672  108477 shared_informer.go:213] Caches are synced for cluster_authentication_trust_controller 
E0214 17:36:37.944711  108477 controller.go:184] an error on the server ("") has prevented the request from succeeding (get endpoints kubernetes)
I0214 17:36:37.991680  108477 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0214 17:36:37.991723  108477 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0214 17:36:38.017712  108477 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I0214 17:36:38.022748  108477 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I0214 17:36:38.022775  108477 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
W0214 17:36:38.079879  108477 lease.go:224] Resetting endpoints for master service "kubernetes" to [127.0.0.1]
... skipping 19 lines ...
    testserver.go:199: Waiting for /healthz to be ok...
    apply_crd_test.go:218: CustomResourceDefinition.apiextensions.k8s.io "noxus.mygroup.example.com" is invalid: [spec.validation.openAPIV3Schema.properties[spec].properties[ports].items.schema.properties[hostIP].default: Required value: default value must be set if key is not required, spec.validation.openAPIV3Schema.properties[spec].properties[ports].items.schema.properties[hostPort].default: Required value: default value must be set if key is not required, spec.validation.openAPIV3Schema.properties[spec].properties[ports].items.schema.properties[name].default: Required value: default value must be set if key is not required, spec.validation.openAPIV3Schema.properties[spec].properties[ports].items.schema.properties[protocol].default: Required value: default value must be set if key is not required, spec.validation.openAPIV3Schema.properties[spec].properties[ports].items.schema.properties[protocol].nullable: Forbidden: key cannot be nullable]
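    The failing check is the validation this PR extends: in a structural schema, every property that a list-map item uses as a merge key (x-kubernetes-list-map-keys) must either be listed in required or carry a default, and a key may never be nullable; the noxus schema violated that for hostIP, hostPort, name, and protocol. A hedged fragment that satisfies the rule; the field names come from the error text, but the exact key list used by the test is an assumption:

        # ports as a list-map whose keys are all either required or defaulted, and not nullable
        cat <<'EOF' > ports-schema.yaml
        type: array
        x-kubernetes-list-type: map
        x-kubernetes-list-map-keys: ["containerPort", "protocol"]
        items:
          type: object
          required: ["containerPort"]
          properties:
            containerPort: {type: integer}
            protocol:      {type: string, default: "TCP"}   # not required, so it must default and must not be nullable
            hostIP:        {type: string}
            hostPort:      {type: integer}
            name:          {type: string}
        EOF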


DONE 2358 tests, 4 skipped, 1 failure in 6.164s
+++ [0214 17:46:40] Saved JUnit XML test report to /logs/artifacts/junit_20200214-173506.xml
make[1]: *** [Makefile:185: test] Error 1
!!! [0214 17:46:41] Call tree:
!!! [0214 17:46:41]  1: hack/make-rules/test-integration.sh:97 runTests(...)
+++ [0214 17:46:41] Cleaning up etcd
+++ [0214 17:46:41] Integration test cleanup complete
make: *** [Makefile:204: test-integration] Error 1
+ EXIT_VALUE=2
+ set +o xtrace
Cleaning up after docker in docker.
================================================================================
[Barnacle] 2020/02/14 17:46:41 Cleaning up Docker data root...
[Barnacle] 2020/02/14 17:46:41 Removing all containers.
[Barnacle] 2020/02/14 17:46:41 Failed to list containers: Error response from daemon: client version 1.41 is too new. Maximum supported API version is 1.40
[Barnacle] 2020/02/14 17:46:41 Removing recently created images.
[Barnacle] 2020/02/14 17:46:41 Failed to list images: Error response from daemon: client version 1.41 is too new. Maximum supported API version is 1.40
[Barnacle] 2020/02/14 17:46:41 Pruning dangling images.
[Barnacle] 2020/02/14 17:46:41 Failed to list images: Error response from daemon: client version 1.41 is too new. Maximum supported API version is 1.40
[Barnacle] 2020/02/14 17:46:41 Pruning volumes.
[Barnacle] 2020/02/14 17:46:41 Failed to prune volumes: Error response from daemon: client version 1.41 is too new. Maximum supported API version is 1.40
[Barnacle] 2020/02/14 17:46:41 Done cleaning up Docker data root.
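Every Barnacle failure above is the same version skew: the client spoke API 1.41 to a daemon that tops out at 1.40. The docker CLI honors an environment override for exactly this case; a sketch:

    docker version --format '{{.Server.APIVersion}}'   # shows what the daemon supports (1.40 here)
    DOCKER_API_VERSION=1.40 docker ps                  # pin the client so the calls above would succeed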
Remaining docker images and volumes are:
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
DRIVER              VOLUME NAME
Cleaning up binfmt_misc ...
================================================================================
... skipping 2 lines ...